Apple doesn’t provide this feature because it would be used for pirating movies. BUT, you can buy a device that sits in between your TV and your AppleTV to do just that. It’s called HDMI Capture.
I get that this is a bug, but it kinda sucks that people feel it’s all right to act this way. Software is hard, and unless you’re using a language with zero-overhead iteration, you’re probably writing your drivers in C and iterating with a for loop like our ancestors did. Off-by-one errors are stupidly common, and everyone is human.
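For illustration only (nothing to do with the actual driver code, just the kind of thing I mean, sketched in Rust): the hand-indexed loop has a bound you can fumble, the iterator form doesn’t, and both compile down to the same machine code.

```rust
fn main() {
    let samples = [3, 1, 4, 1, 5];

    // C-style indexed loop: the bound is easy to get wrong.
    // Writing `0..=samples.len()` instead of `0..samples.len()` is the
    // classic off-by-one (in C it would read past the end of the array).
    let mut sum = 0;
    for i in 0..samples.len() {
        sum += samples[i];
    }

    // Iterator form: no index to fumble, and it optimizes to the same
    // thing -- that's the "zero-overhead iteration" part.
    let sum_from_iter: i32 = samples.iter().sum();

    assert_eq!(sum, sum_from_iter);
}
```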
I mean, fuck mega corporations. This is still cringeworthy.
That being said, it’s going to be fun to see the quality differences in these operating systems in a few years because, as far as I know, Apple would rather force Swift into the systems-level language space than adopt an established memory-safe systems language today.
Meanwhile Microsoft, Google, Amazon, etc. are all investing heavily in Rust by integrating it into their platforms.
I guess a more modern example you might run into is something like Rust’s no_std environment, which strips out the parts of the standard library that don’t work on every device the language is designed to target (namely microcontrollers that don’t even have an operating system on them). Or maybe you’re writing your own operating system.
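Here’s roughly what the skeleton looks like, as a minimal sketch assuming a bare-metal target (the `_start` name and the empty loops are placeholders, not any particular board’s real entry point):

```rust
// Minimal no_std skeleton: no OS, no heap, no std -- only `core`.
#![no_std]
#![no_main]

use core::panic::PanicInfo;

// With no OS to hand control back to, we have to say what a panic does.
#[panic_handler]
fn panic(_info: &PanicInfo) -> ! {
    loop {}
}

// The board's startup/reset code would jump here instead of a libc main.
#[no_mangle]
pub extern "C" fn _start() -> ! {
    loop {
        // toggle an LED, poll a sensor, etc.
    }
}
```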
Another example that comes to mind is General Magic, a company that designed a programming language with a similar Capabilities system meant to restrict access to functions and code on their devices, with copyright enforcement as a primary use case in mind. There’s a documentary about the device if you’re interested: https://www.generalmagicthemovie.com
There are languages designed with Capabilities in mind: whatever starts the program gets to decide what functionality is exposed to the running program. It’s great for situations where you might run untrusted code and want to, as an example, not allow network access or filesystem access.
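The shape of the idea, sketched in Rust (Rust doesn’t actually enforce this — nothing stops the plugin from reaching for std::fs itself the way a real capability language or runtime would prevent — and LogWriter/run_plugin are names I made up):

```rust
use std::fs::File;
use std::io::Write;

// A "capability": holding one of these is the only way the plugin gets to
// write anywhere. The entry point decides whether to hand one out at all.
struct LogWriter {
    file: File,
}

impl LogWriter {
    fn write_line(&mut self, line: &str) -> std::io::Result<()> {
        writeln!(self.file, "{line}")
    }
}

// The "untrusted" code only receives what it was explicitly given --
// no network handle, no general filesystem handle.
fn run_plugin(log: &mut LogWriter) -> std::io::Result<()> {
    log.write_line("plugin ran")
}

fn main() -> std::io::Result<()> {
    // Whatever starts the program decides which capabilities exist.
    let mut log = LogWriter { file: File::create("plugin.log")? };
    run_plugin(&mut log)
}
```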
More generally, there are also sandboxing techniques that runtimes provide. WebAssembly, for instance, is designed for programs to run in their own memory space with a restricted set of functions and, again, Capabilities. This might be nice if you ever work on a cloud application that allows users to upload their own programs and you want to impose limits on those programs. Think AWS Lambda, except the programs running wouldn’t necessarily even have access to the filesystem or be able to make web requests unless the user configures that.
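A rough sketch of what the host side can look like, assuming the wasmtime and anyhow crates (the guest.wasm filename and the exported run function are made up for the example). The key bit is that the guest can only call what the host explicitly links in:

```rust
use wasmtime::{Engine, Linker, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();
    let module = Module::from_file(&engine, "guest.wasm")?;
    let mut store = Store::new(&engine, ());

    // Deliberately link nothing: the guest gets no host functions,
    // so no filesystem, no sockets -- it can only compute and return.
    let linker = Linker::new(&engine);
    let instance = linker.instantiate(&mut store, &module)?;

    // Call the module's exported `run` function (hypothetical name).
    let run = instance.get_typed_func::<(), i32>(&mut store, "run")?;
    println!("guest returned {}", run.call(&mut store, ())?);
    Ok(())
}
```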
It might be a good design space for even more esoteric areas, like device drivers. Like, why worry about whether your GPU drivers are collecting telemetry on your computer if you can just turn off that capability?
There are older applications of sandboxing that are a bit further from what you’re asking as well, like iframes on a webpage: they let code served from servers you don’t necessarily control run without you needing to worry about it reading access tokens from local storage.
Or even BSD Jails and chroot.
Good question 💖
Nanosaur!
Cult of the Lamb’s latest content release is bringing 2-player local co-op.