I boil water in a sauce pot on the stove. Slosh it into my mug. Plunk in a tea bag and set the timer on my microwave for 3:30 so that I don’t forget and over-steep it. No milk. No sugar.
I write code and play games and stuff. My old username from reddit and HN was already taken and I couldn’t think of anything else I wanted to be called, so I just picked some random characters like this:
>>> import random
>>> ''.join([random.choice("abcdefghijklmnopqrstuvwxyz0123456789") for x in range(5)])
'e0qdk'
My avatar is a quick doodle made in KolourPaint. I might replace it later. Maybe.
I understand a little Japanese, but I’m not very good at it.
Alt: [email protected]
Have you tried Resonance? It’s a mystery adventure game set in modern times where you play as four different characters whose stories interconnect. It’s been a while since I played it (a decade or so?) but I remember that it had an interesting game mechanic that let you use memories like items in various interactions, as well as a number of puzzles that I rather liked the design of.
artificial gestation
The word “matrix” literally means “womb” in its older sense.
It’s not a GUI library, but Jupyter was pretty much made for the kind of mathematical/scientific exploratory programming you’re interested in doing. It’s not the right tool for making finished products, but is intended for creating lab notebooks that contain executable code snippets, formatted text, and visual output together. Given your background experience and the libraries you like, it seems like it’d be right up your alley.
The Wikipedia article for hqx points out that an implementation exists as a filter in ffmpeg.
You can run a command line conversion of e.g. a PNG -> PNG using hqx upscaling like:
ffmpeg -i input.png -filter_complex hqx=4 output.png
The =4 is for 4x upscaling; the implementation in my version of ffmpeg supports 2x, 3x, and 4x upscaling.
As a quick and dirty way to get a semi-live preview, you can do the conversion with make and use watch make to rebuild the conversion periodically. (You can use the -n flag to increase the retry rate if the default is too long to wait.) make will exit quickly if the file hasn’t changed. Save the image in your editor and keep an image viewer that supports auto-reload on change open to see a “live” preview of the output. (e.g. eog can do it, although it won’t preserve the size of the image – at least not in the copy I have; mine’s a bit old though.)
Sample Makefile:
output.png : input.png Makefile
	ffmpeg -y -i input.png -filter_complex hqx=4 output.png
Note the -y option to tell ffmpeg to overwrite the file; otherwise it will stop to ask you whether you want to overwrite every time you save. Also, in case you’re not familiar with Makefiles: you need a real tab (not spaces) on the line with the command to run.
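With that Makefile in place, the periodic rebuild is just something like this (the 2-second interval is an arbitrary choice):
watch -n 2 make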
ffmpeg also appears to support xbr (with =n option as well) and super2xsai if you want to experiment with those too.
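I haven’t tried those myself, but presumably the invocation is the same shape, e.g.:
ffmpeg -y -i input.png -filter_complex xbr=4 output.png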
I’m not sure if this will actually do what you want artistically, but the existing implementations in ffmpeg make it easy to experiment.
I don’t know if there are any existing implementations that work well enough yet for it to actually be relaxing, but it might be possible to set up a hands-free IF experience by hooking up speech-to-text and text-to-speech tools to the game.
Can Z3 account for lost bits? Did it come up with just one solution?
It gave me just one solution the way I asked for it. With additional constraints added to exclude the original solution, it also gives me a second solution – but that one is peculiar to my implementation and doesn’t match yours. If you modeled exactly how the bits are supposed to end up in the result, you could probably find any other solutions that exist, but I just did it in a quick and dirty way.
This is (with a little clean up) what my code looked like:
#!/usr/bin/env python3
import z3

# The four observed outputs; each one is a raw 32-bit value divided by 2^32.
rand1 = 0.38203435111790895
rand2 = 0.5012949781958014
rand3 = 0.5278898433316499
rand4 = 0.5114834443666041

def xoshiro128ss(a, b, c, d):
    # One step of xoshiro128**: returns (output, next state).
    # The 0xFFFFFFFF masks emulate 32-bit wrapping arithmetic.
    t = 0xFFFFFFFF & (b << 9)
    r = 0xFFFFFFFF & (b * 5)
    r = 0xFFFFFFFF & ((r << 7 | r >> 25) * 9)
    c = 0xFFFFFFFF & (c ^ a)
    d = 0xFFFFFFFF & (d ^ b)
    b = 0xFFFFFFFF & (b ^ c)
    a = 0xFFFFFFFF & (a ^ d)
    c = 0xFFFFFFFF & (c ^ t)
    d = 0xFFFFFFFF & (d << 11 | d >> 21)
    return r, (a, b, c, d)

# 64-bit symbolic variables give the 32-bit math above headroom.
a, b, c, d = z3.BitVecs("a b c d", 64)
nodiv_rand1, state = xoshiro128ss(a, b, c, d)
nodiv_rand2, state = xoshiro128ss(*state)
nodiv_rand3, state = xoshiro128ss(*state)
nodiv_rand4, state = xoshiro128ss(*state)

# Find a starting state that reproduces all four outputs.
z3.solve(a >= 0, b >= 0, c >= 0, d >= 0,
         nodiv_rand1 == int(rand1 * 4294967296),
         nodiv_rand2 == int(rand2 * 4294967296),
         nodiv_rand3 == int(rand3 * 4294967296),
         nodiv_rand4 == int(rand4 * 4294967296))
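In case it’s useful, the “additional constraints” bit just amounts to re-asking with the first answer excluded. A quick sketch of the idea, continuing from the script above (the concrete values are the solution from my other comment):
# Same constraints as before, but in a reusable solver.
s = z3.Solver()
s.add(a >= 0, b >= 0, c >= 0, d >= 0,
      nodiv_rand1 == int(rand1 * 4294967296),
      nodiv_rand2 == int(rand2 * 4294967296),
      nodiv_rand3 == int(rand3 * 4294967296),
      nodiv_rand4 == int(rand4 * 4294967296))
# Rule out the state that was already found, forcing a different one.
s.add(z3.Or(a != 2299200278, b != 2929959606, c != 2585800174, d != 3584110397))
if s.check() == z3.sat:
    print(s.model())
else:
    print("no other solution")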
I never heard about Z3
If you’re not familiar with SMT solvers, they are a useful tool to have in your toolbox. Here are some links that may be of interest:
Edit: Trying to fix formatting differences between kbin and lemmy
Edit 2: Spoiler tags and code blocks don’t seem to play well together. I’ve got it mostly working on Lemmy (where I’m guessing most people will see the comment), but I don’t think I can fix it on kbin.
If I understand the problem correctly, this is the solution:
a = 2299200278
b = 2929959606
c = 2585800174
d = 3584110397
I solved it with Z3. Took less than a second of computer time, and about an hour of my time – mostly spent trying to remember how the heck to use Z3 and then a little time debugging my initial program.
What I’d do is set up a simple website that uses a little JavaScript to rewrite the date and time into the page and periodically refresh an image under/next to it. Size the image to fit whatever free space is left in however you set up the iPad, and then you can stick anything you want there (pictures/reminder text/whatever) with your favorite image editor. Upload a new image to the server when you want to change the note. The idea with an image is that it’s really easy to do and keeps the effort of redoing the layout to a minimum – just drag stuff around in your image editor and you’ll know it’ll all fit as expected as long as you don’t change the resolution (instead of needing to muck around with CSS and maybe breaking something when you can’t see the device to check that it displays correctly).
There are a couple of issues to watch out for – e.g. what happens if the internet connection/server goes down, screen burn-in, keeping the browser from being closed/switched to another page, keeping it powered, etc. – that might or might not matter depending on your particular circumstances. If you need to fix all of that, it might be more trouble than just buying something purpose built… but getting a first-pass DIY version working is trivial if you’re comfortable hosting a website.
Edit: If some sample code that you can use as a starting point would be helpful, let me know.
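To give a concrete idea in the meantime, here’s a bare-bones sketch of the kind of page I mean (the note.png name and the refresh intervals are just placeholders to adapt):
<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>Note board</title></head>
<body>
<div id="clock"></div>
<img id="note" src="note.png">
<script>
function updateClock() {
  // Rewrite the current date and time into the page.
  document.getElementById("clock").textContent = new Date().toLocaleString();
}
function refreshNote() {
  // Re-request the image with a cache-busting query string so a newly
  // uploaded note.png shows up without reloading the whole page.
  document.getElementById("note").src = "note.png?" + Date.now();
}
setInterval(updateClock, 1000);   // update the clock every second
setInterval(refreshNote, 60000);  // check for a new image every minute
updateClock();
</script>
</body>
</html>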
My guess is that if browsers as we know them weren’t invented, HyperCard would’ve become the first browser eventually. No idea where things would progress from there or if it’d have been better or worse than the current clusterfuck. Maybe we’d all be talking about our “web stacks” instead of websites, and have various punny tools like “pile” and “chimney” and “staplr”. Perhaps PowerPoint would’ve turned into a browser to compete with it.
If browsers were invented but JavaScript specifically was not, we’d probably all be programming sites in some VB variant like VBScript (although it might be called something different).
You can’t really, as others have pointed out, but I like Philip K Dick’s definition of reality: “Reality is that which, when you stop believing in it, doesn’t go away.”
GPT4-Vision can do it, sort of. It doesn’t have a particularly great understanding of what’s going on in a scene, but it can be used for some interesting stuff. I posted a link a few weeks back to an example from DALL-E Party, which hooks up an image generator and an image describer in a loop: https://kbin.social/m/[email protected]/t/661021/Paperclip-Maximizer-Dall-E-3-GPT4-Vision-loop-see-comment
merde posted a link in the comments there to the goatpocalypse example – https://dalle.party/?party=vCwYT8Em – which is even more fun.
I mean, we all know what happened when old Godzilla was hoppin’ around Tokyo city like a big playground… right?
Didn’t the GDPR have a data portability rule requiring that sites provide users the ability to easily export their own data? Does that not apply to Lemmy for some reason – or, am I misremembering it? (I remember account data download being a big deal a while back on reddit, but it’s been a few years…)
I tried messing around with the colors a bit in an image editor and this was the best adaptation I could make: https://files.catbox.moe/03k8sc.png
Yeah; I also tried subbing in case that kicks off federation, and searched a few titles to see if they ended up in the random magazine incorrectly (stuff like that happens sometimes with kbin). The magazine has seen a few microblogs mentioning the channel, and it clearly picked up the avatar/icon, description, etc. somehow, but it doesn’t seem to be getting any videos as threads/posts, and I couldn’t find any floating around disconnected either. I think kbin most likely doesn’t understand what PeerTube is publishing through ActivityPub, but there could always be federation weirdness or something.
Doesn’t seem to work right on kbin, unfortunately, although it does show up as a magazine: https://kbin.social/m/[email protected]
Reminds me a bit of Kammy Koopa
I don’t. I use the timer on my microwave.