In this article, I will describe Rust return type polymorphism (a.k.a. generic returns), a feature that I recently discovered and that has intrigued me ever since.
Rust has emerged for the fifth year in a row as the most loved programming language in a developer survey carried out by Stack Overflow. There are various reasons why developers love Rust, one of which is its memory safety guarantee.
Rust guarantees memory safety with a feature called ownership. Ownership works differently from a garbage collector in other languages, because it simply consists of a set of rules that the compiler checks at compile time.
I’ll talk about how you can try it, what it’s for, why I made it, and some problems I ran into while writing it.
Welcome to the final post in our “testing embedded Rust” series. In the previous posts we covered how to test a Hardware Abstraction Layer, how to test a driver crate and how to set up a GitHub Actions (GHA) workflow to include Hardware-In-the-Loop (HIL) tests. In this blog post we’ll cover three different approaches to testing an embedded application.
Rust is certainly not the easiest of programming languages, especially at first glance. But once you overcome the initial “wall of fear” and start to grasp some of the key concepts, Rust becomes a language you are going to love, and you will probably find yourself looking for more and more excuses to use it and learn it further. For this reason, we wanted to collect a list of resources that can help new Rust adventurers find their path towards becoming real “rustaceans”.
It’s important to mention that this list is totally subjective and not comprehensive. We are only listing material that we had a chance to explore and that we enjoyed. We are sure there is still a lot of great content out there that we haven’t found yet! So if you think something is missing here, let us know in the comments box below! We will also be mentioning some paid content, but we are not receiving any fee or using referral links when we mention these resources.
Rust has the concept of zero-sized types, or ZSTs for short. These are types that hold no information as part of their layout. A common misconception, however, is that this makes them trivial. On the contrary, they offer the necessary properties for complex interactions between the type system and values. In the following text I will explore how they give rise to mathematical reasoning within Rust, and show how this has concrete applications: we will work around restrictions in Rust’s trait system, and see how libraries can side-step long-term commitments to single dependencies without breaking changes according to Semantic Versioning.
In the course of trying to figure out how to smoothly zoom timelines of a billion trace events, I figured out a cool tree structure that I can’t find elsewhere online; it turned out two of my friends had independently derived it after their own searches came up empty. It’s a way of implementing an index for doing range aggregations on an array (e.g., “find the sum/max of elements [7,12]”) in O(log N) time, with amortized constant-time appends, a simple implementation (around 50 lines of Rust), low constant factors, and low memory overhead.
I’d like to think that my understanding of “async Rust” has increased over the past year or so. I’m 100% onboard with the basic principle: I would like to handle thousands of concurrent tasks using a handful of threads. That sounds great!
And to become proficient with async Rust, I’ve accepted a lot of things. There are blue functions and red functions, and red (async) functions are contagious.
One of the distro maintainers of my distro of choice, Gentoo, filed a bug report with the cryptography project saying that the switch broke builds on several platforms that Gentoo still supports. The cryptography authors replied that those platforms are not really used anymore, and that they were going to stick with Rust because it has better memory safety than C. They also argued that it is better to force better programming languages on people for the sake of better security.
At first glance, it appears that the better argument is on the side of the cryptography maintainers, but after thinking about it carefully, I think they are wrong.
There is always a certain amount of tension between the goals of those using older, less-popular architectures and the goals of projects targeting more mainstream users and systems. In many ways, our community has been spoiled by the number of architectures supported by GCC, but a lot of new software is not being written in C—and existing software is migrating away from it. The Rust language is often the choice these days for both new and existing code bases, but it is built with LLVM, which supports fewer architectures than GCC does, and fewer than Linux runs on. So the question that arises is how much these older, non-Rusty architectures should be able to hold back future development; the answer, in several places now, has been “not much”.
The latest issue came up on the Gentoo development mailing list; Michał Górny noted that the Python cryptography library has started replacing some of its C code with Rust, which is now required to build the library. Since the Gentoo Portage package manager indirectly depends on cryptography, “we will probably have to entirely drop support for architectures that are not supported by Rust”. He listed five architectures that are not supported by upstream Rust (alpha, hppa, ia64, m68k, and s390) and an additional five that are supported but do not have Gentoo Rust packages (mips, 32-bit ppc, sparc, s390x, and riscv).
In 2017 I worked on the Stylo project, uplifting Servo’s CSS engine (“style system”) into Firefox’s browser engine (“Gecko”). This involved a lot of gnarly FFI between Servo’s Rust codebase and Firefox’s C++ codebase. There were a lot of challenges in doing this, and I feel like it’s worth sharing things from our experiences.
Macros are a language feature that sits very far toward the “more power” end of the chart. Macros give you the ability to abstract over the source code. In exchange, you give up the ability to (automatically) reason about the surface syntax. As a specific example, rename refactoring doesn’t work 100% reliably in languages with powerful macro systems.
I do think that, in an ideal world, this is the wrong trade for a language which wants to scale to gigantic projects. The ability to automatically reason about and transform source code gains in importance when you add more programmers, more years, and more millions of lines of code. But take this with a huuuge grain of salt — I am obviously biased, having spent several years developing Rust IDEs.
That said, macros have a tremendous appeal: they are a language designer’s duct tape. Macros are rarely the best tool for the job, but they can do almost any job. Language design is incremental, and a macro system relieves the design pressure by providing a ready poor man’s substitute for many features.
In this post, I want to explore what macros are used for in Rust. The intention is to find solutions which do not give up the “reasoning about source code” property.