also at beehaw

  • 2 Posts
  • 62 Comments
Joined 1 year ago
Cake day: June 15th, 2023




  • degree in Visual Art, work in digital asset management for a marketing (blech) studio. I’d love to get into a DAM position somewhere less ethically awful, like a symphony or museum or something, buuut my position pays really well relative to other similar jobs I’ve looked at, so that’ll have to wait until I feel more established in life.

    took a couple basic comp-sci classes in college, though, and went to a coding bootcamp before I got my current position. running linux on my laptop, might switch to it on my desktop. I use bash a lot at my job for renaming files.

    there’s a lot about tech-heavy areas that interests me, but it’d drive me crazy to be around too much of it. I think there’s a lot of good in the liberal arts that gets missed by the sort of hard rationalists who tend to hang out in tech spaces.






  • My high school and college journals are filled with so much angst about crushes and “do they like me? don’t they like me?” that it’s physically difficult to re-read them now, hah.

    I had a crush on a redhead from about age 10 until I left for college (it was a small town), then crushed on the various guys in my dorm and friend group (and one hot artist girl in a philosophy class) until I decided I needed to practice dating in junior year and actually went on a few dates thanks to Tinder. Though I didn’t escape entirely: I had a couple of crushes on regular customers when I worked at an art supply store after graduating.

    Now I’m happily partnered and don’t miss the anxiety of crushes, though the idea of having a crush carries a twinge of excitement that will always feel nostalgic.




  • So I’m no expert at running local LLMs, but I did download one (the 7B Vicuna model recommended by the LocalLLM subreddit wiki) and try my hand at training a LoRA on some structured data I have.

    Based on my experience, the VRAM available to you is going to be way more of a bottleneck than PCIe speeds.

    I could barely hold a 7B model in 10 GB of VRAM on my 3080, so 8 GB might be impossible or at least very tight (there’s some rough math at the end of this comment). IMO, to get good results with local models you really need large amounts of VRAM and 13B-or-above models.

    Additionally, when you’re training a LoRA, the model plus the training data get loaded into VRAM. My training dataset wasn’t very large, and even so I kept running into VRAM constraints during training (there’s a sketch of that kind of setup at the end of this comment, too).

    In the end I concluded that, in its current state, running a local LLM is an interesting exercise, but it’s only really great on enthusiast-level hardware with loads of VRAM (4090s, etc.).
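
    To put rough numbers on the “barely fits” claim above, a back-of-the-envelope sketch (weights only; the KV cache, activations, and framework overhead all add more on top):

```python
# Back-of-the-envelope VRAM needed just to hold model weights.
# Real usage is higher: KV cache, activations, and CUDA overhead
# come on top of this.

def weight_vram_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate GB of VRAM for the weights alone."""
    return params_billion * 1e9 * bytes_per_param / 1024**3

for label, bytes_per_param in [("fp16", 2.0), ("int8", 1.0), ("4-bit", 0.5)]:
    print(f"7B @ {label}: ~{weight_vram_gb(7, bytes_per_param):.1f} GB")

# 7B @ fp16:  ~13.0 GB -> doesn't fit in 10 GB unquantized
# 7B @ int8:  ~6.5 GB  -> fits, with room left for the KV cache
# 7B @ 4-bit: ~3.3 GB  -> workable even on an 8 GB card
```

    And for the LoRA side, a minimal sketch of the kind of setup I mean, using Hugging Face transformers + peft (the model name and hyperparameters here are illustrative placeholders, not exactly what I used):

```python
# Minimal LoRA setup sketch: load the frozen base model quantized to
# 8-bit so the weights take roughly half the fp16 VRAM, then attach
# small trainable adapters. Training still needs extra VRAM on top for
# activations, gradients, and optimizer state for the adapter weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE = "lmsys/vicuna-7b-v1.5"  # illustrative 7B model, not a recommendation

tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(
    BASE,
    load_in_8bit=True,   # ~7 GB of weights instead of ~13 GB at fp16
    device_map="auto",
)

lora_config = LoraConfig(
    r=8,                                  # adapter rank: small = few params
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # attention projections only
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of 7B
```

    Even with adapters that small, activation memory scales with batch size and sequence length, which is where training kept running me over budget on a 10 GB card.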






  • I purged my comments and deleted my main account on July 1st, which was surprisingly emotional for me. I use Alien Blue on mobile, which still works so far, but now that my main account is logged out, I’ll never be able to log another account in because authentication has been broken in Alien Blue for a while.

    I’m keeping Alien Blue installed for two reasons: one, to check on a friend who only posts updates on reddit, and two, to read r/games a couple times a week for headlines and discussion. Lemmy just doesn’t have the same level of engagement or discussion as r/games; even though there’s a certain brand of insufferable commenter there, the majority of people post thoughtful comments that run longer than one or two sentences, and those are the kinds of threads I like reading. Lemmy threads seem shallower: lots of replies to the parent comment, but very few threads that go more than one or two comments deep.