This comment sums it up pretty nicely:
LOL innovative invention of swapping memory to storage…… maybe they can call it something cool like “cache”.
Apple being “innovative” my ass, lmao
Cache u inside
How bow dat?
Well, if that commenter had more than just a vague idea of caching and/or swapping, they would know that the right algorithm can make or break performance.
That paper is not “we invented caching”, but “this is how we make certain models work well despite the constraints imposed by RAM and flash storage.”
It’s a worthy job for an engineer or researcher. Not quite as innovative as the invention of the wheel, but still enough to write a paper on (and read it, if you can manage to understand it).
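The point above about the right algorithm making or breaking performance is easy to illustrate. Here's a toy LRU (least-recently-used) cache sketch in Python; the class name and capacity are made up for illustration, not anything from the paper:

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: evicts the least-recently-used entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None  # cache miss
        self.data.move_to_end(key)  # mark key as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")       # touching "a" makes it most recently used
cache.put("c", 3)    # cache is full, so "b" (least recently used) is evicted
```

Swap the eviction rule (FIFO, random, LRU) and the hit rate on the same access pattern changes dramatically, which is the whole game when flash reads are orders of magnitude slower than RAM.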
The easiest way to tell that something’s not really innovative is if the person describing it uses the word innovative.
Can you give an example of something that actually was innovative, that no-one called innovative?
The spoked wheel.
Interestingly, it looks as if nothing was really called innovative before 1960, with usage peaking in 2000, and it’s now in decline.
Peaking in 2000 seems odd - I see or hear that word daily and that definitely wasn’t the case back in 2000. Interesting!
Corndogs
35 upvotes in the technology community… man, you guys really are just knee-jerk reactionaries and not actually knowledgeable about tech at all. git gud
Everyone likes to trash machine learning because the power requirements are high, but what they don’t realize is that we’re in the very first days of this technology (well, first couple decades of the technology being around, first few years of it being advanced enough to have anything to show off). Every technology that got bundled together into your phone was just as useless when it was first invented. Honestly, compared to the development of most other technologies I’ve looked at, the pace of development in AI has been shocking.
Literally once a week, I see some news story about AI researchers delivering an order-of-magnitude speedup in some aspect of AI inference. The technique described here apparently allows for a 20x speedup on GPUs.
whisper.cpp works off the ML cores on the M-series chips. It’s faster than the 1080 Ti I have in a server doing the same things, by orders of magnitude. And it sips power.
Purpose built chips can be super powerful for their specific purposes.
Make Siri Great For Once
Huhum…?
Still working on that…
I’m sorry, try again later.
You’re triggering me lol
Found the following websites about you’re triggering me lol.
Sigh! [unzips] Go ahead…
deleted by creator
Tbh I’m more excited to see someone do use webnn, webgpu and petals together. Building smaller tighter models is good too.
I don’t understand the innovation, I already run LLMs and stable diffusion on a laptop from 2011.
I have no doubt it could be run on my Android phone.
Why the hell do we want to encourage people running MLMs on our phones?!! I don’t want to be part of some stupid pyramid scheme nonsense.
dōTERRA Phone
HerbaLife Galaxy S30
Haha
LLM≠MLM
😭
Here, you dropped this: /s
/s is for weaklings.
HEAR ME ROAR.
Siri could suck an order of magnitude less and work offline, for starters.