This isn’t Linux, but Linux-like. It’s a microkernel written in the Rust programming language. It’s still experimental, but I think it has great potential. It has a GUI desktop, but the compiler isn’t quite fully working yet.
Has anyone used this before? What was your experience with it?
Note: If this is inappropriate since this isn’t technically Linux, mods, please take it down.
I don’t understand the obsession with Rust.
From my personal experience, I can give you two reasons. The first is that this is the first general-purpose language that can be used for all projects. You can use it in the web browser via WebAssembly, it is good for backends, and it is also low-level enough for OS development and embedded work. Other languages are good for some things and really bad at others. The second reason is that it is designed around catching errors at compile time. The error handling and strict typing force the developer to handle errors. I have to spend more time writing the program, but considerably less time finding and fixing bugs.
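To make the compile-time error handling point concrete, here is a minimal, hypothetical sketch (the port and file name are just illustrations, not from any real project):

```rust
use std::fs;
use std::num::ParseIntError;

// Returning a Result forces the caller to deal with failure explicitly;
// silently ignoring it triggers a compiler warning, and unwrapping it is
// a visible, deliberate choice.
fn parse_port(text: &str) -> Result<u16, ParseIntError> {
    text.trim().parse::<u16>()
}

fn main() {
    // The match must cover both the success and the error case,
    // otherwise the program will not compile.
    match parse_port("8080") {
        Ok(port) => println!("listening on port {port}"),
        Err(e) => eprintln!("invalid port: {e}"),
    }

    // The same applies to I/O: read_to_string returns a Result,
    // so a missing file has to be handled (here via a fallback).
    let config = fs::read_to_string("config.toml")
        .unwrap_or_else(|_| String::from("# defaults"));
    println!("{config}");
}
```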
That sounds pretty great. I get sick of having to switch gears for every layer. As a hobbyist it is tough to remember five or six languages well enough when only coding something a few times a year.
Since I do embedded, scripting, web front and back end this is sure tempting.
I have been hesitant to try to learn yet another language (this would make…ummm… idk I lost count ages ago). But with all the hype I may break down and give it a whirl.
Sounds like Python may be a better fit, if it’s supported on the embedded devices you use, as it will cover scripting and backend too. Rust has quite a learning curve and can be rather verbose.
I do use Python quite a bit for scripting, backend, and app work, and I’ve used MicroPython a little bit, though I prefer C/C++ for embedded. It’s pretty great for what I need.
I might mess around with Rust out of curiosity anyway, though the downsides you mention make it less compelling for me, personally. I’m not a big fan of verbose languages (e.g., Java, though I have used it for some apps).
Messing around with Rust is certainly worth it, as it can change the way you think in a way that improves your code in whatever language you write.
If you are curious definitely do check it out! It’s a really cool language to learn and you’ll start to enjoy the fight the compiler puts up.
Sorry, but I don’t see the reasoning behind the enthusiasm for Python. Sure, it is great for scripting (this includes machine learning), but why for anything else?
An easy language that works everywhere? This person is writing code a few times a year.
Wdym
I realize that even $2 systems are running full Linux distros these days but Python does not map to what I think of as “embedded”. If you have a full Python interpreter, it is already a pretty rich environment.
That said, this is what computing is starting to look like. There is less and less “bare metal”. I work with people that claim to be “firmware” engineers and then, when you look, you find out they have a full Ubuntu distro running and they may as well be running on a laptop.
I feel like C++ is as competent as Rust for any project and it’s definitely older.
Not sure I can think of anything I’d enjoy less than trying to build a web app in cpp
Before using Rust I was using C++ for most projects and while it is a really powerful language there were some big problems:
At least it won’t complain about rigid or skolem type variables, or the occurs check.
Rust was created because C++ was so bad. Just take a look at crates: they need a whole lot less maintenance because there are fewer bugs.
My point wasn’t that C++ is good. My point was that C++ can and is used everywhere (desktop applications, web applications, OSs,…) and is older than Rust. So I feel that “this is the first general purpose language that can be used for all projects” is false. Probably “this is the first general purpose language that I (and many others) like to use for all projects” is true, but is a different claim.
TL;DR: You said Rust was the first language capable of systems, app, and web work; it isn’t.
It depends on what “can be used” means. I really like C#, and it “can be used” for that full stack. C#, for example, can write out native machine code, can manually and precisely lay out memory, and can directly link to assembly-language routines. You can write an OS in C#. Even as a fan, though, I would argue that it is the wrong tool for that job.
In the same vein, while I know C++ “can” be used for web dev, I would argue that anybody that tries to do so for any significant project is insane.
I am not sure I would use Rust for “everything”, but I do think the claim that Rust is one of the first languages where it is reasonable or practical to choose it for any of these uses is valid. Rust code can be very high level and often does not look much different from a scripting language. At the same time, it can go as low-level as you want. This article is about an OS in Rust (and there are a few). Web dev in Rust is totally reasonable and there are a few popular frameworks available. Rust has one of the best WASM stories around.
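As a rough illustration of the “reads like a scripting language” point, a small hypothetical sketch using iterators and closures (the word list is made up):

```rust
// No manual memory management in sight: iterator chains and closures
// do the work, and the compiler infers most of the types.
fn main() {
    let words = ["redox", "kernel", "rust", "gui", "wasm"];

    let shouted: Vec<String> = words
        .iter()
        .filter(|w| w.len() > 3)      // keep the longer words
        .map(|w| w.to_uppercase())    // transform each one
        .collect();                   // gather into a Vec

    println!("{shouted:?}");
}
```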
It is good and rust improves on its gaping weaknesses.
Yeah, I never made that claim; the thread’s OP did.
That is definitely a “feeling”.
I know the evangelists can be somewhat overwhelming, but its popularity is not unwarranted. It’s fairly easy to pick up and has an incredibly enthusiastic and welcoming community. People like it because it’s incredibly performant and memory safe. In terms of DX it’s really a joy to work with. It just has a LOT going for it, and the main drawback you’ll hear about (difficulty) is really overblown; most devs can pick it up in a matter of months.
The main difficulty I have with Rust (what prevents me from using it), is that the maintainers insist on statically compiling everything. This is fine for small programs, and even large monolithic applications that are not expected to change very often.
But for the machine learning projects I work on, I might want to include a single algorithm from a fairly large library of algorithms. The amount of memory used is not trivial, I am talking about the difference between loading a single algorithm in 50 MB of compiled code for a dynamically loadable library, versus loading the entire 1.5 GB library of algorithms of statically linked code just to use that one algorithm. Then when distributing this code to a few dozen compute nodes, that 50 MB versus 1.5 GB is suddenly a very noticeable difference.
There are other problems with statically linking everything as well, for example, if you want your application to be written in a high-level language like Python, TypeScript, or Lisp, you might want to have a library of Rust code that you can dynamically load into the Python interpreter and establish foreign function bindings to the Rust APIs. But this is not possible with statically linked code.
And as I understand it, it is a difficult technical problem to solve. Apparently, in order for Rust to optimize a program and guarantee type safety and performance, it needs the type information in the source code. This type information is not normally stored in dynamically loadable libraries (the .so or .dll files), so if you dynamically load a library into a Rust program, its type safety and performance guarantees go out the window. So the Rust compiler developers have chosen to make everything as statically compiled as possible.
This is why I don’t see Rust replacing C any time soon. A language like Zig might have a better chance than Rust because it can produce dynamically loadable libraries that are fully ABI-compatible with the libraries produced by C compilers.
You can load Rust into Python just fine. In fact, several packages have started requiring a Rust compiler on platforms that don’t get prebuilt binaries. It’s why I installed Rust on my phone.
The build files for Rust are bigger than you may expect, but they’re not unreasonably big. Languages like Python and Java like to put their dependencies in system folders and cache folders outside of their project so you don’t notice them as often, but I find the difference not that problematic. The binaries Rust generates are often huge but if you build in release mode rather than debug mode and strip the debug symbols, you can quickly remove hundreds of megabytes of “executable” data.
Rust can be told to export things in the C FFI, which is how Python bindings are generally accomplished (although you rarely deal with those because of all the helper crates).
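As a minimal, hypothetical sketch of what that C FFI surface looks like from the Rust side (the function name is made up; in practice helper crates like PyO3 generate this boundary for you):

```rust
// Built as a cdylib, the resulting .so/.dll exports a plain C symbol
// that any language with a C FFI can call.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}
```

From Python you could then load the compiled library with something like ctypes.CDLL and call add(2, 3), though most people reach for PyO3/maturin rather than hand-rolling this.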
Statically compiled code will also load into processes fine; it just takes up more RAM than you may like. The OS normally deduplicates dynamically loaded libraries across running processes, but with statically compiled programs you only get the one blob (which itself then gets deduplicated, usually).
Rust can also load and access standard DLLs. The safety assertions do break, because these files are accessed through the C FFI, which is marked unsafe automatically, but that doesn’t need to be a problem.
There are downsides and upsides to static compilation, but it doesn’t really affect glue languages like Python or TypeScript. Early versions of Rust lacked the C FFI, and there are still issues with Rust programs dynamically loading other Rust programs without going through the C FFI, but I don’t think that’s a common issue at all.
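To make the “accessed through the C FFI, marked unsafe” point concrete, a minimal sketch in the style of the classic FFI example, calling the C standard library’s abs from Rust:

```rust
// The extern block declares the foreign C function; every call to it
// must sit inside an `unsafe` block, because the compiler cannot check
// anything across the C ABI.
extern "C" {
    fn abs(input: i32) -> i32;
}

fn main() {
    let result = unsafe { abs(-3) };
    println!("abs(-3) according to C: {result}");
}
```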
I don’t see Rust replacing all of C either, because I think Rust is a better replacement for C++ than for C. The C niches it does replace (parsers, drivers, GUIs, complex command-line tools) weren’t really things I would write in C in the first place. There are still cases where Rust just fails (it can’t deal with running out of memory, for one), so languages like Zig will always have their place.
Is it not possible for Rust to optimize out unused functions as with C? That seems… like a strange choice if so.
No, Rust can do dead code elimination. And I just checked: Rust can indeed do FFI bindings from other languages when you ask the compiler to produce dynamically linked libraries, but I am guessing it has the same problems as Haskell when it produces .so or .dll files. In Haskell, things like “monad transformers” depend pretty heavily on function inlining in order to achieve good performance.
So I am talking more about how Rust makes use of the type system to make decisions about when to inline functions, which is pretty important when it comes to performance. You usually can’t inline across module boundaries unless the modules are all statically linked. So as I understand it, if you enable dynamic linking in your Rust program, you might see performance suffer a lot compared to static linking, and this is why most Rust people (as I understand it) just make everything statically linked by default.
I am not sure that is quite right. I don’t think Rust supports just enabling dynamic linking of its dependencies. It can talk to dynamically linked libraries, which is how FFI works, and you can compile Rust crates to be dynamically linked. But when you go down this route you are talking over the C ABI. This requires some effort from the code author to make their APIs exportable as C types, and it means you lose all safety when talking over the C ABI.
I also don’t think that Rust inlines across a crate boundary unless the function is marked as inline or LTO is enabled; inlining across crate boundaries is expensive and so is only done when explicitly needed or asked for. It is more that you lose features like generics, traits, and other things that are not supported over the C ABI.
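A rough, hypothetical sketch of what that opt-in looks like on the library side (square and apply are made-up names):

```rust
// A plain function is normally not inlined into other crates unless LTO
// is enabled, but marking it #[inline] ships its body in the crate
// metadata so downstream crates can inline it.
#[inline]
pub fn square(x: f64) -> f64 {
    x * x
}

// Generic functions are monomorphized in the calling crate anyway, so
// they can always be inlined across the boundary, which is also why they
// cannot be exported over a C ABI as-is.
pub fn apply<F: Fn(f64) -> f64>(f: F, x: f64) -> f64 {
    f(x)
}
```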
Do you need inlining if you just use fixed monad transformers?
I am not sure what you mean by “fixed” monad transformers. If you mean writing your own newtype where the functor variable is the only type variable, essentially what you are doing is hand-inlining the monad transformer, and so no, if you inline by hand, the compiler doesn’t need to do it.
Haskell inlines all newtype definitions automatically, so if your monad transformer has all of its type variables bound (except for the functor variable, because that is a special case the Haskell compiler is specifically designed to handle), the compiler will usually reduce those to ordinary lambda expressions automatically, and lambda expressions usually optimize to the most efficient machine code.
The only time the compiler cannot reduce a newtype to an efficient lambda is if the non-functor variables, e.g. the state type variable or the exception type variable, are unbound. Those values could become anything at all at the call site, limited only by the constraints set by the type context. So the type context information, a lookup table of type class instances, must be associated with that lambda expression, and in order to do that, the compiler must create a closure around those values. Creating closures allocates values on the heap, and this is much, much slower than efficient lambda expressions, and no faster than allocating a data constructor as with Free Monads.
Alexis King did a presentation on it where she explains all of this extremely well, if you are interested: https://youtu.be/0jI-AlWEwYI
It is a bit long, but at 17:40 or so she starts talking about strategies for how monads and effects can be implemented in the GHC intermediate code, and compares Free Monads and effects to monad transformers. At 21:15 or so she begins to explain how newtype types can be optimized away completely: newtype constructors don’t exist at all in the low-level code; they are a “zero-cost abstraction.” On the other hand, data constructors (used for Free Monads and effects) always allocate something on the heap, which is an order of magnitude slower.
Then at around 27:45 she begins to show how newtypes with type variables cannot be inlined across module boundaries for the reason I explained above (type context tables associated with closures), and so monad transformers cannot be optimized across module boundaries.
Yep, I mean like newtype MyT m a = MyT (ReaderT MyEnv (StateT MyState m) a). But one can use ReaderT MyEnv (StateT MyState m) a directly as well. I found the MTL style (tagless final) a bit problematic anyway, so I wanted to comment on this.
It is and it does.
So you’re working on your machine learning projects in Zig?
No, Python and C++, which were the languages chosen by both Google and Facebook for their AI frameworks.
I just think that if a systems programming language like Rust does not provide a good way to facilitate dynamic linking the way C and C++ do, these languages will start running into issues as the size of the compiled binaries becomes ever larger. I think we might all be a little too comfortable with the huge amount of memory, CPU cycles, and network bandwidth that we have nowadays, and it leads to problems when you want to scale up a deployment. So I think Zig might make a more viable successor to C or C++ as a systems programming language than Rust does.
That said, I definitely think Rust and Haskell’s type systems are much better than that of Zig.
deleted by creator
And the fucking MIT License
Yes, as much as I appreciate memory safety and Rust in particular, I’m very worried by this pivot away from copyleft and the GPL, especially the rewriting-in-Rust phenomenon for fundamental stuff. It’s safer, yes, but the rewrites are pretty much all non-GPL, and it seems very risky to me. Make no mistake, the industry is riding this wave to move away from copyleft to permissive licenses.
I wish that people understood the importance of FSF and GNU
Well that is rather insidious. Crap. They probably understand the reasons for the GPL very well. Doesn’t mean they support them.
I’m sure there’s some community pull as well, because most of the Rust ecosystem seems to have converged on MIT. But what makes me despair is the wilful sidelining of the GPL and everything GNU by some open source community members/corporate people. So yeah, you’re probably right.
You make it sound like a conspiracy. Just accept that some things are organically more popular, like MIT which is very easy to understand and use for normies. It’s not perfect, but that’s how it is
MIT is a terrible license that only got popular because of the popularity of the anti-open source movement in the last decade.
One could write books about what’s wrong with the MIT license.
It could even theoretically be argued that MIT has in some ways allowed big tech companies to proliferate, by effectively allowing them to take open-source code, modify it, and then close it off in their proprietary software. What does this mean? It means that the work of countless dedicated open-source developers can be co-opted by companies that have done almost none of the work, reaping several billions of dollars, while the developers who actually did the work make no money. It’s like opening your doors wide only to have someone come in, take your stuff, and sell it back to you.
In contrast, in licenses like the GPL, there’s a requirement that if you use GPL-licensed code and modify it, your new code also has to be open-source under the GPL.
I love the free software ideals, but I think we’ve got a different understanding of what constitutes a good and a bad license. What many people seem to forget about software licenses is that there are these other countries besides America. They couldn’t care less about whatever judges rule over there. A good license is a dumb simple license that anyone can enforce in court with ease. A bad license is a convoluted license that crumbles like a house of cards in court. I read the GPL. It’s convoluted. It’s an opaque terms-of-service agreement riddled with legal boilerplate disguised as a software license. A poor execution of the ideals I hold. I only use the GPL as a formality to say that I support the free software ideals, but I have zero confidence in enforcing the GPL.
I’d like to correct you by saying that the GPL is DEFINITELY enforceable in countries other than America. I can’t speak for every country (though that will be the case with every license), but for instance it’s definitely enforceable in Europe. For example, in Germany and France there have been a few lawsuits that the FSF helped carry out against immoral companies.
GPL Enforcement Cases - FSFE
If you’re in Germany the Institute for Legal Questions on Free and Open Source Software is a law firm that literally works only on enforcing the GPL, FOSS licenses and other technological human rights that are being ignored by big tech.
If you want to be even more sure about European enforcement, you may want to check out the EUPL v1.2, which is GPLv3 compatible.
In other countries, such as Japan, the GPL is also enforceable, so long as you treat it the same way as copyright, so you’re willing to sue companies that you know are stealing from you (the FSF can help you if you can’t afford it).
Russia and China don’t care, but… it’s Russia and China, that’s not really news, is it? :)
EDIT: I will write a full article about the legal enforce-ability of FOSS licenses such as the GPL before the end of the year
Rust devs be like:
Shame that we don’t have a proper copyleft license tho? GPL, as nice as the intentions are, is a license so convoluted that I’m not sure whether it’d hold up in court in my country.
It’s a system programming language that isn’t C or C++.
Edit to add: How did Go get on that page? That’s a stretch.
deleted by creator
Then we arrive at Rust as a natural outcome.
And it’s of course possible to migrate to Rust from C or C++ progressively, fish has almost got it done.
Did fish migrate progressively, tho? I thought they swapped out everything at once as soon as the rewrite was ready.
Well, they are not going to release in between, but their rewrite still “works” at each commit being a hybrid of Rust and C++.
Ah, alright
Rust isn’t just a new improved version of C or C++. It’s completely new and it feels completely different to use Rust. In a positive way
Guaranteed memory safety
(notice: I am not a Rust or C/C++ expert)
Doing all that is creating a completely separate programming language from C. Rust is that programming language.
Rust does that with modules and crates.
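For instance, a tiny hypothetical sketch of that module organization (geometry and area are made-up names):

```rust
// Modules give namespaced, privacy-checked organization within a crate;
// crates are the unit of distribution and linking.
mod geometry {
    pub fn area(width: f64, height: f64) -> f64 {
        width * height
    }
}

fn main() {
    println!("{}", geometry::area(3.0, 4.0));
}
```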
You mean having consistent/universal style guidelines? Rust pretty much has that with rustfmt.
Safe Rust is memory safe (using things like the borrow checker), and Unsafe Rust is (usually?) separated out using the unsafe keyword.
Although Unsafe Rust seems to be quite a mess, idk, haven’t tried it.
Rust has macros, iterators, lambdas, etc. C doesn’t have those. C++ probably has those but in a really weird C++ way.
deleted by creator
I’d say no. Programming safely requires non-trivial transformations of the code and a radical change in style, which afaik cannot easily be automated.
Do you think that there’s any chance to convert from this to this? It requires understanding of the algorithm and a thorough rewrite. Automated tools can only generate the former because they must not change C’s crooked semantics.
deleted by creator
I think there’s no need to stick with one particular language. It benefits to learn more languages and bring the “good parts” of their design into your code whatever you are writing it in.
Btw, it happens that I’ve learned a bit of RISC-V, with Rust.
C and C++ can’t be fixed retroactively because old code must remain compatible.
If you’re going to implement your own C dialect, you may as well just write a new language.
For C++ that’s Rust, for C that’s probably Zig (Zig will just let you import existing C files, which helps with porting). Carbon and experimental languages like Jakt may also work, it all depends on what your priorities are.
Idk what the ISO fuck-up is, and I don’t code enough to appreciate whatever technical debt exists in either, so I probably sound like an idiot, but…
Since I do infosec, the glaring issue for me is not being memory safe.
Conservatives always lose, especially in tech.
The idea is fewer bugs, due to stricter rules when developing and compiling. You can understand that.
Then there’s also more access to build tools and high-level programming without changing languages.
If you have no need for that, then just know others do and it’s a great thing.