WebAssembly threads support is one of the most important performance additions to WebAssembly. It allows you to either run parts of your code in parallel on separate cores, or the same code over independent parts of the input data, scaling it to as many cores as the user has and significantly reducing the overall execution time.
In this article you will learn how to use WebAssembly threads to bring multithreaded applications written in languages like C, C++, and Rust to the web.
I have written Python for the last 10 years, and I always tend to write code in a "functional" way - lambda and so on; it makes me feel clean and happy. Following this warm feeling, I decided to try a true functional language.
So I read a couple of tutorials online and went through the famous Learn You a Haskell for Great Good! It was a long and painful journey - which is not over yet. After some retrospection, I wanted to share my own introduction to Haskell based on my Python experience: what I wish I had learned first, and what I wish I had learned later down the road.
Hey you! Have you ever wanted to become a CPU Whisperer? Me too! I’m a frontend web developer by trade, but low-level assembly code and compilers have always fascinated me. I’ve procrastinated on learning either for a long time, but recently picking up Rust and hanging out in a lot of online Rust communities has given me the kick in the butt to dive in. Rustaceans use fancy words and acronyms like auto-vectorization, inlining, alignment, padding, linking, custom allocators, endianness, system calls, LLVM, SIMD, ABI, TLS, and I feel bad for not being able to follow the discussions because I don’t know what any of that stuff is. All I know is that it vaguely relates to low-level assembly code somehow, so I decided I’d learn assembly by writing entirely too many brainfuck compilers in Rust. How many is too many? Four! My compile targets are going to be x86, ARM, WebAssembly, and LLVM.
As a web developer, I use relational databases every day at my job, but they’re a black box to me. Some questions I have:
What format is data saved in? (in memory and on disk)
When does it move from memory to disk?
Why can there only be one primary key per table?
How does rolling back a transaction work?
How are indexes formatted?
When and how does a full table scan happen?
What format is a prepared statement saved in?
In other words, how does a database work?
To figure things out, I’m writing a database from scratch. It’s modeled on sqlite because sqlite is designed to be small, with fewer features than MySQL or PostgreSQL, so I have a better hope of understanding it. The entire database is stored in a single file!
It teaches practical techniques for using the language better.
It teaches how the language works and why. What it teaches is firmly grounded in the ECMAScript specification (which the book explains and refers to).
It covers only the language (ignoring platform-specific features such as browser APIs) but not exhaustively. Instead, it focuses on a selection of important topics.
Last week, I drank my first cup of Rust. I learned concepts that are foreign to the languages I know: ownership, borrowing, and lifetimes. This week I want to drink the second cup and see where it leads me.
This is the 2nd post in the Start Rust focus series. Other posts include:
My second cup of Rust (this post)
As CPU cores become both faster and more numerous, the limiting factor for most programs is now, and will be for some time, memory access. Hardware designers have come up with ever more sophisticated memory handling and acceleration techniques, such as CPU caches, but these cannot work optimally without some help from the programmer. Unfortunately, neither the structure nor the cost of using the memory subsystem of a computer or the caches on CPUs is well understood by most programmers. This paper explains the structure of memory subsystems in use on modern commodity hardware, illustrating why CPU caches were developed, how they work, and what programs should do to achieve optimal performance by utilizing them.
In this brief tutorial I’m aiming to make a small game for the ZX Spectrum, written 100% in assembler. I’ve done a bunch of projects for the speccy using SDCC; while I couldn’t completely escape assembly, the C compiler did a lot of heavy lifting for me. In other words, I traded control for convenience.
Now, since I’ve never actually done a 100% assembler project before (ignoring some tiny DOS TSR experiments), this is a learning experience for me as well. A lot of the things I’ll do here are likely not optimal, but what I present is what happened to work for me. All the source code is free to use in whatever way you wish, with no warranty whatsoever.
When implementing streaming algorithms, one often needs to count events, where an event means something like a packet arrival or a connection establishment. Since the number of events is large, the available memory can become a bottleneck: an ordinary n-bit counter can account for no more than 2^n - 1 events.
One way to handle a larger range of values with the same amount of memory is approximate counting. This article provides an overview of the well-known Morris algorithm and some of its generalizations.
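To make the idea concrete, here is a minimal sketch of the basic Morris counter in Python (the class name and structure are my own, not from the article): instead of storing the count n directly, we store a small value c and increment it only with probability 2^-c, so c stays near log2(n) and 2^c - 1 is an unbiased estimate of n.

```python
import random

class MorrisCounter:
    """Approximate counter that stores roughly log2(n) instead of n."""

    def __init__(self):
        self.c = 0  # stored exponent, fits in very few bits

    def increment(self):
        # Bump the exponent with probability 2**-c, so doubling the
        # true count costs only one extra stored increment on average.
        if random.random() < 2.0 ** -self.c:
            self.c += 1

    def estimate(self):
        # E[2**c - 1] equals the true number of events n.
        return 2 ** self.c - 1

counter = MorrisCounter()
for _ in range(10_000):
    counter.increment()
# counter.c is now around log2(10000) ~ 13, so a handful of bits
# suffice where an exact counter would need 14 bits and, more
# importantly, would cap out at 2**n - 1 events.
```

The estimate has high variance in this basic form; the generalizations the article covers (e.g. using a base smaller than 2) trade memory for accuracy.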
This guide is intended to help you gain a true understanding of SQL query speeds. It includes benchmarks that demonstrate the speed of slow and fast query types. If you work with SQL databases such as PostgreSQL, MySQL, SQLite, or similar, this knowledge is a must.
Recently, I needed to learn a completely new language, Clojure, but couldn’t find the kind of tutorial I wanted. So, I decided to create one while learning Clojure.
Clojure is a functional programming language, and learning functional programming languages is sometimes hard if you’ve only had experience with imperative languages. I have paid careful attention to making this page easy to understand for people who don’t have experience with functional programming languages, so please don’t hesitate to read this page even if you don’t know anything about functional programming.
Hopefully, this page helps you learn functional programming and start writing Clojure!
In this short writeup I’ll give examples of various multiprocessing libraries, how to use them with minimal setup, and what their strengths are.
If you want a TL;DR - I recommend trying out loky for single-machine tasks, and checking out Ray for larger tasks.
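As a taste of the minimal-setup style these libraries aim for, here is a sketch using the standard library's concurrent.futures.ProcessPoolExecutor (loky deliberately mirrors this API, so the shape carries over; the square function is just a stand-in I made up):

```python
from concurrent.futures import ProcessPoolExecutor

def square(x):
    # Stand-in for a CPU-bound task worth shipping to a worker process.
    return x * x

if __name__ == "__main__":
    # map fans the inputs out over a pool of worker processes and
    # collects the results back in input order.
    with ProcessPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(square, range(5)))
    print(results)  # [0, 1, 4, 9, 16]
```

The `if __name__ == "__main__"` guard matters: on platforms that spawn (rather than fork) worker processes, each worker re-imports the module, and the guard prevents the pool from recursively recreating itself.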
A differential equation-solving analog device is a reconfigurable computing platform which leverages the physics of the underlying substrate to implement dynamical system computations. In the last article, we discussed how a programmer would manually configure an analog device to perform computation. In this article, we will discuss how to automatically configure an analog device to run a target dynamical system. We frame this problem of automatically configuring the analog hardware as a compilation problem.
The compiler takes, as input, a specification of the dynamical system and a specification of the analog device. The dynamical system specification provides the differential equations to implement on the hardware. The analog device specification describes the input ports, output ports, programming interface (data fields and modes), and behavior of each block. This specification also defines all of the possible digitally settable connections available on the device. The compiler produces as output an analog circuit specification which describes a circuit made up of configured blocks. The analog circuit specification annotates the signals in the described circuit which implement dynamical system variables.
Working with data from an art museum API and from the Twitter API, this lesson teaches how to use the command-line utility jq to filter and parse complex JSON files into flat CSV files.
This is my own collection of hard-earned knowledge about how integers work in C/C++, and how to use them carefully and correctly. In this article, I try to strike a balance between brevity (easing the reader’s burden) and completeness (providing absolute correctness and extensive detail).
Whenever I read or write C/C++ code, I need to recall and apply these rules in many situations, such as: choosing an appropriate type for a local variable / array element / structure field, converting between types, and doing any kind of arithmetic or comparison. Note that floating-point number types will not be discussed at all, because they mostly involve analyzing and handling approximation errors that stem from rounding. By contrast, integer math is a foundation of programming and computer science, and all calculations are exact in theory (ignoring implementation issues like overflow).
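To illustrate the gap between "exact in theory" and fixed-width machine integers, here is a small Python sketch (my own illustration, not from the article) that emulates C's uint32_t wraparound by reducing exact results modulo 2^32:

```python
UINT32_MASK = 0xFFFFFFFF  # 2**32 - 1, the largest value a uint32_t can hold

def uint32_add(a, b):
    # Python integers are arbitrary-precision, so a + b is exact;
    # masking reduces it mod 2**32, mimicking C's unsigned wraparound.
    return (a + b) & UINT32_MASK

# Exact math says 4294967295 + 1 == 4294967296, but a 32-bit
# unsigned integer wraps around to 0.
print(uint32_add(0xFFFFFFFF, 1))  # 0
print(uint32_add(2, 3))           # 5
```

In C, unsigned overflow is well-defined to wrap like this, while signed overflow is undefined behavior, which is exactly the kind of rule the article catalogs.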