Hacker News
npsomaratna on July 26, 2023 | on: Guide to running Llama 2 locally
I think you'd need to offload the model into CoreML to do so, right? My understanding is that none of the current popular inference frameworks do this (not yet, at least).
DennisP on July 26, 2023
I did find some support from Apple:
https://machinelearning.apple.com/research/neural-engine-tra...
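The linked Apple article is about deploying transformers on the Neural Engine via Core ML. As a rough illustration of the offloading step npsomaratna mentions, here is a minimal sketch using `torch.jit.trace` plus `coremltools.convert`, assuming both `torch` and `coremltools` are installed; the toy model, tensor shape, and output file name are hypothetical, not taken from the article or any inference framework.

```python
def convert_relu_to_coreml(save_path="tiny.mlpackage"):
    """Hypothetical sketch: trace a toy PyTorch model and convert it to a
    Core ML package that Core ML can schedule onto the Neural Engine."""
    import torch
    import coremltools as ct

    class TinyModel(torch.nn.Module):
        def forward(self, x):
            return torch.nn.functional.relu(x)

    # Trace the model with an example input so coremltools can convert it.
    example = torch.zeros(1, 8)
    traced = torch.jit.trace(TinyModel().eval(), example)

    mlmodel = ct.convert(
        traced,
        inputs=[ct.TensorType(shape=example.shape)],
        convert_to="mlprogram",
        # Ask Core ML to prefer the Neural Engine (it may fall back to CPU).
        compute_units=ct.ComputeUnit.CPU_AND_NE,
    )
    mlmodel.save(save_path)
    return mlmodel


if __name__ == "__main__":
    try:
        convert_relu_to_coreml()
    except ImportError:
        print("torch/coremltools not available; skipping conversion")
```

For a full LLM the same conversion path applies in principle, but — as the parent comment notes — popular inference frameworks don't currently do this for you, and a model the size of Llama 2 would need quantization and careful memory handling before it fits this workflow.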