Performance is one of the top reasons developers choose Rust for their applications. In fact, it’s the first reason listed under the “Why Rust?” section on the rust-lang.org homepage, even before memory safety. This is for good reason, too: many benchmarks show that software written in Rust is fast, sometimes even the fastest. This doesn’t mean that everything written in Rust is guaranteed to be fast, though. In fact, it’s surprisingly easy to write slow Rust code, especially when attempting to appease the borrow checker by cloning or Arc-ing instead of borrowing, a strategy which is generally recommended to new Rust users. That’s why it’s important to profile and benchmark Rust code to see where any bottlenecks are and to fix them, just like you would in any other language. In this post, I’ll demonstrate some basic tools and techniques for doing so, based on my recent experience working to improve the performance of the
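As a sketch of the clone-versus-borrow trade-off mentioned above (the function names and data here are illustrative, not from the post):

```rust
// Two ways to pass a Vec to a function: by value (forcing the caller
// to clone) versus by borrow. Both satisfy the borrow checker, but
// the first allocates and copies the whole vector on every call.
fn sum_cloned(v: Vec<u64>) -> u64 {
    v.iter().sum()
}

fn sum_borrowed(v: &[u64]) -> u64 {
    v.iter().sum()
}

fn main() {
    let data: Vec<u64> = (0..1_000).collect();
    let mut total = 0;
    for _ in 0..1_000 {
        total += sum_cloned(data.clone()); // allocates + copies each iteration
        total += sum_borrowed(&data);      // no allocation at all
    }
    println!("{total}");
}
```

Both loops compute the same sums; a profiler would show the cloned version spending its time in allocation and memcpy rather than in the actual work.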
Concerns over the performance of programs written in Python are often overstated — for some use cases, at least. But there is no getting around the problem imposed by the infamous global interpreter lock (GIL), which severely limits the concurrency of multi-threaded Python code. Various efforts to remove the GIL have been made over the years, but none have come anywhere near the point where they would be considered for inclusion into the CPython interpreter. Now, though, Sam Gross has entered the arena with a proof-of-concept implementation that may solve the problem for real.
Working as a professional software engineer can be tiring: you’re often working on a small part of a larger system, making small changes and fixing bugs. Over time you also learn to rigidly follow best practices: write code that is efficient, document your functions, always check error codes, etc. It’s easy to forget the fun of just programming and the satisfaction of creating something.
I set out to try to reclaim some of that fun by setting myself a challenge: implement a Lisp interpreter in a weekend. How much could I achieve? I wasn’t sure but I figured it should be achievable. The nice thing about Lisp is the lack of syntax, meaning that parsing of the program text isn’t difficult. But there are still lots of other things to take into account. I was also curious as to how small (in terms of lines of code) a Lisp interpreter could be.
This is a follow-up to Error handling in Rust from a couple of days ago. The moment we want to use error propagation for different error types, we have to rely on trait objects with Box<dyn Error>, which means we defer a lot of information from compile time to runtime for the sake of convenient error handling.
Which you might consider not convenient at all, because there’s some downcasting involved to get the original error back, and we rely on trait objects and dynamic dispatch to carry something like an error along our codebase. I’d rather not have this information erased at compile time!
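A minimal sketch of the pattern being criticized, with a hypothetical ParseError type standing in for whatever concrete error a real program would produce:

```rust
use std::error::Error;
use std::fmt;

// A hypothetical concrete error type for illustration.
#[derive(Debug)]
struct ParseError(String);

impl fmt::Display for ParseError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "parse error: {}", self.0)
    }
}

impl Error for ParseError {}

// Propagating through Box<dyn Error> erases the concrete type...
fn do_work() -> Result<(), Box<dyn Error>> {
    Err(Box::new(ParseError("bad input".into())))
}

fn main() {
    if let Err(e) = do_work() {
        // ...so recovering it requires a runtime downcast.
        match e.downcast_ref::<ParseError>() {
            Some(pe) => println!("recovered concrete error: {pe}"),
            None => println!("some other error: {e}"),
        }
    }
}
```

The `downcast_ref` call is exactly the runtime check the paragraph complains about: the compiler knew the type was ParseError, but that knowledge was thrown away at the `Box<dyn Error>` boundary.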
Every time I want to quickly understand something about an advanced type system or programming language concept I get lost when I see something like this on Wikipedia:
Linear type systems are the internal language of closed symmetric monoidal categories, much in the same way that simply typed lambda calculus is the language of Cartesian closed categories. More precisely, one may construct functors between the category of linear type systems and the category of closed symmetric monoidal categories.
Why is there so much research around types if perfectly applying them to programming languages is impractical? 1 It turns out mathematicians are the ones behind the development of most of the work related to type systems: type theory.2 One might think they do this to support programming languages. After all, it’s part of the job of mathematicians to formalize what other disciplines developed with a more practical mindset. Infinitesimal calculus was greatly advanced by Isaac Newton to solve immediate problems in physics, for example. But the development and study of type theory pre-dates programming languages! What problems were mathematicians trying to solve when they developed type theory, a curious programmer might ask.
What is it about voxels that makes people go crazy? Throughout the past decade, there have been SO many developers obsessed with shrinking them down to have as many as possible (1, 2, 3, 4, 5), admittedly including myself. It’s exciting both as a developer and to the gaming community to see amazing algorithms produce so much detail and think about the possibilities it brings to virtual worlds.
Voxels = destruction!
Voxels = building!
Voxels = the universe!
And yet, there are no commercially available, general-purpose, widely used voxel engines on which games are built. The term “(micro) voxel engine” is basically synonymous with vaporware. We see jaw-dropping showcases that are sometimes accompanied by hyperbolic claims (“UNLIMITED DETAIL”) and then radio silence. But why?
After using code coverage information and real-world files to improve an audio metadata parser I am writing in Zig, the next step was to fuzz test it in order to ensure that crashes, memory leaks, etc. were ironed out as much as possible.
The problem was that I had no idea how to fuzz Zig code. While Zig uses LLVM and therefore in theory has access to libFuzzer, the necessary integration with SanitizerCoverage has yet to be implemented (see also this comment on a closed PR), so I figured I would try to find another avenue in the meantime.
In Linux there are so many permission mechanisms, depending on exactly what you want to do, that it dazzles the mind. There is suid, dbus policies, polkit, Linux capabilities, file attributes, PAM modules, SELinux, and the list goes on. It is not surprising, then, that choosing the correct approach can become paralyzing or anxiety inducing.
My original aim was to write a chess program smaller than 1024 characters. So far, I have not been able to do it. Even when I dropped the nitty-gritty details of the FIDE rules, like castling and en passant capture, I could not get the size much below 1200 characters.
So I shifted my aim somewhat, and wrote something less minimalistic in up to 2000 characters of source-code. This gave me enough space to implement a (hash) transposition table, checking of the legality of the input moves, and full FIDE rules. Except for under-promotions, which I considered a dull input problem, since the program is unlikely to ever find itself in a situation where it would be useful to play one.
(For real purists: a close-to-minimal version that does understand full FIDE rules including under-promotion can be found here. It measures 1433 characters. The under-promotions are implemented in a single line that wastes 32 characters. To play one, type 1, 2, or 3 as the 5th character of the input move, for promotion to R, B, or N, respectively. If you type nothing, or 0, promotion is to Q.)
As far as I am aware, this still makes micro-Max the smallest C Chess program in existence. A close competitor for this honor, Toledo, measures 2168 characters. Despite its smaller size, micro-Max seems to beat Toledo easily.
C++17 introduced the std::optional<T> template class, which is analogous to the Maybe/Optional monad implementation in many other languages. “Analogous” is doing a lot of work in this statement, because the C++ type checker is not going to help you avoid dereferencing an empty optional the way Rust, Haskell, Scala, Kotlin, TypeScript, and many other languages will.
That does not make it useless. As with many things in C++, we will be careful™ when using it and write only programs that do not dereference an empty optional.
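For contrast, here is a minimal Rust sketch of the compiler-enforced handling the paragraph alludes to (first_word is a made-up helper, not from the post):

```rust
// In Rust, an Option<&str> cannot be used as a &str directly:
// the type checker forces the caller to handle the empty case.
fn first_word(s: &str) -> Option<&str> {
    s.split_whitespace().next()
}

fn main() {
    // let n = first_word("").len(); // does not compile: Option<&str> has no .len()
    match first_word("") {
        Some(w) => println!("first word: {w}"),
        None => println!("empty input"), // the None arm cannot be omitted
    }
}
```

In C++, by contrast, `*opt` on an empty `std::optional` compiles fine and is undefined behavior at runtime, which is exactly why the “careful™” discipline above is needed.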
In languages that deal mostly with reference types, an optional type can be implemented as an object that wraps a reference and a tag bit that tells if the optional has some data or nothing.1 In C++, on the other hand, std::optional<T> will inline the value type T into itself. That means the general behavior of an optional of T depends a lot on the specifics of the type it’s wrapping.
A few months back (at the start of this blog) I was thinking about interesting things you can find in C++, when I realized one thing: the keyword class doesn’t really have a good reason to exist!
This may seem a bit harsh said that way, but I will explain my statement later in the article.
Anyway, studying the reason for the word class to exist led me to look into the history of the language. It was very interesting and I really enjoyed it.
So I decided to write a mini-series of articles about the history of C++ aiming to present concepts that may be outdated today, in C++20, and explain why some strange or debatable things were made that way in the past.
I’ve been using Go for a few years now, mostly in my open source project Lazygit. In my day job I use Ruby and Typescript, and I’ve also spent some time with Rust. Each of those languages has design quirks that can grind a developer’s gears, and although my own precious gears have been ground by every language I’ve used, Go is the only language that has made me feel indignant.
This series is my attempt to spell out exactly why. My goal is not to convince you that Go is an objectively bad language (I’m not qualified to make that judgment call), but to convince you that, for certain people, working in Go feels like a constant struggle against stupid constraints.
Some of Go’s shortcomings will be resolved in the future, but I’m going to focus on what it’s like using the language today.
This post is about Go’s error handling.
Word processing is possible on the VIC-20 and can be surprisingly comfortable despite the small screen text area. Here I will show a variety of word processors each of which handles the 22 column restriction in a different way. In many ways this is a follow-on article to: Spreadsheets on the Commodore VIC-20.
The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.