Is it? This review is pretty thorough with some realistic benchmarks. Spoiler: it won’t replace an entire data center, but it works for model development and testing. Enjoy!
Meh. They should have tested ggml performance instead, using the Metal build of llama.cpp.
Also, 460€ for an additional 16 GB of RAM is insane.
Oh yes, it all comes at quite a cost.
May I ask what additional insight would be gained from that benchmark in your view? Of course, being able to access the GPU power is important, but I am genuinely curious about different perspectives. Thanks.
It's well optimized to run on hardware other than a discrete GPU, it has a quite good Metal backend (I've heard), and it uses quantization, so it might fit a decently sized model into memory. I don't know of any advantage PyTorch would have on that platform.
So it’d really show us how much you can squeeze out of the platform.
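The quantization point is easy to sanity-check with back-of-the-envelope math: a 7B-parameter model at fp16 barely leaves headroom on a 16 GB machine, while 4-bit quantization brings the weights down to a few gigabytes. A rough sketch (figures are approximate; real GGUF files carry some extra overhead, and quantization schemes like Q4_K use slightly more than 4 bits per weight):

```python
def weight_footprint_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of model weights in GB (decimal), ignoring
    file-format overhead and the KV cache needed at inference time."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# 7B model at common precisions (illustrative, not exact file sizes)
for bits, label in [(16, "fp16"), (8, "8-bit"), (4, "4-bit")]:
    print(f"7B @ {label}: ~{weight_footprint_gb(7, bits):.1f} GB")
```

On a unified-memory machine the quantized weights and the OS share the same pool, which is why the 4-bit case is the one that makes a 16 GB Mac interesting at all.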
Thank you for the extra explanation.
I'm gravitating toward this platform partly because the workstation will be in my sleeping area. The ability to run a long job overnight without excess fan noise is worth quite a bit to me.
Of course, it has to be actually useful, too. My main local compute tasks will be the usual development, data pre-processing, lighter fine-tuning jobs, and end-user model testing.
This unit will surely suffice, but I have to weigh the price premium against my need for a quiet bedroom. So thanks for the additional perspective.
Completely missing from the Lemmy post's “summary” is this, from the Conclusion section of the W&B review:
We are still miles apart from the desktop NVIDIA GPUs, and the same analysis from the M1 Pro holds today. It’s nice seeing Apple capable of improving the GPU performance over the previous generation, but we will probably have to wait to replace our NVIDIA GPUs.
Don’t get me wrong, the performance per watt is good but we are still far behind what you get on any current Nvidia desktop GPU. Check this report to see how they compare against NVIDIA.