Why is inlining so important in C++? Clearly, it reduces function call overhead: if a function is inlined, there is no need to spend time setting up its arguments, jumping to it, creating a stack frame, and then undoing all that upon returning. More interestingly, though, inlining enables other compiler optimizations. In this article, I will show examples of constant propagation and loop-invariant code motion (LICM). Then, I will explain how inlining enables these optimizations to apply more widely and show an example of the consequences when that doesn’t happen.
In this paper, we introduce the No-Order File System (NoFS), a simple, lightweight file system that employs a novel technique called backpointer-based consistency to provide crash consistency without ordering writes as they go to disk. A backpointer is added to each object in the file system, and the forward and backward pointers allow us to determine consistency by cross-checking them. This simple addition surprisingly allows us to guarantee consistency in a wide range of crash scenarios.
We utilize a formal model to prove that NoFS provides data consistency in the event of system crashes; we show through experiments that NoFS is robust to such crashes, and delivers excellent performance across a range of workloads. Backpointer-based consistency thus allows NoFS to provide crash consistency without resorting to the heavyweight machinery of traditional approaches.
Most graph theorists will agree that among the vast number of graphs that exist there are only a few that can be considered really interesting.
It is the aim of this House of Graphs project to find a workable definition of ‘interesting’ and provide a searchable database of graphs that conform to this definition. We also allow users to add additional graphs which they find interesting. In order to avoid abuse, only registered users can add new graphs.
People seem to think that writing a garbage collector is really hard, a deep magic understood by a few great sages and Hans Boehm (et al). Well, it's not. In fact, it's rather straightforward. I claim that the hardest part of writing a GC is writing the memory allocator, which is as hard to write as it is to look up the malloc example in K&R.
A few important things to note before we begin. First, our code will depend on the Linux kernel. Not GNU/Linux, but the Linux kernel. Secondly, our code will be 32-bit and not one bit more. Thirdly, please don't use this code. I did not intend for it to be wholly correct, and there may be subtle bugs I did not catch. Regardless, the ideas themselves are still correct. Now, let's get started.
When it comes to figuring out how similar various pieces of data are to one another (and which is the closest match in a large group of candidates), simhashing is one of my favourite algorithms. It's somewhat simple, brilliant in its approach, but still not obvious enough for most people (myself included) to come up with it on their own. Readers may be familiar with hashing algorithms such as MD5 or SHA, which aim to very quickly create a unique signature (hash) of the data. These functions are built so that identical files or blobs of data share the same hash, so you can rapidly tell whether two blobs are identical, or check that a blob's signature is unchanged after transmission to detect corruption. Different blobs, even if mostly the same, get an entirely different signature.
While simhashes still aim to have unique signatures for documents, they also attempt to make sure that documents that look the same get very similar hashes. That way, you can look for similar hashes to figure out if the documents are closely related, without needing to compare them bit by bit. It’s a statistical tool to help us find near-duplicates faster.
That's a bit vague, but that's alright. I'll try to explain it in a way that builds a good intuition for how it works.
Here's the thing: on every single software project or product I've worked on, JSON serialization has been an endless source of pain and bugs. It's a constant stream of trouble. Why is that so? What is so inherently complicated about the problem of JSON serialization that we always, by necessity, struggle with it?
It's weird, because the JSON object model is really, really simple. Moreover, it's a bounded, finite set of problems, isn't it? How do you serialize or deserialize JSON? Well, gee, you need to map between text and the various entities in the JSON object model. Specifically, you need to be able to handle the values true, false and null, you need to handle numbers, strings and whitespace (all of which are unambiguously defined), and you need to handle arrays and objects of values. That's it. Once you've done that, you're done. There are no more problems!
Hello Gurlic world!
Last month I launched a privacy-aware website for processing PDF documents, called PDF Shelter. It avoids communication with remote servers by performing all operations in the browser, using open-source JS libraries such as pdf-lib and pdf.js.
There are several new features in the pipeline, but I would like to show what we already have to gather some feedback on functionality, UX, etc. Thanks!
Today, I'll take you on another little walk through the land of program transformations. Let's begin with a simple binary tree, with values of unknown type in the leaves, as well as the canonical map function:
data T a = L a | B (T a) (T a)
map1 :: (a -> b) -> T a -> T b
map1 f (L x) = L (f x)
map1 f (B t1 t2) = B (map1 f t1) (map1 f t2)
As you can see, this map1 function uses the program stack as it traverses the tree. Our goal is now to come up with a map function that does not use the stack!
Why? Good question! In Haskell there isn't a strong need for this, as the Haskell stack is allocated on the heap, just like your normal data, so there is plenty of stack space. But in other languages or environments the stack space may have a hard limit, and it may be advisable not to use unbounded stack space.
That aside, it’s a fun exercise, and that’s sufficient reason for me.
(In the following, I assume that tail calls, i.e. calls that occur as the very last action of a function, with the result returned unmodified, do not actually use stack space. Once all recursive function calls are tail calls, the code is equivalent to an imperative loop, as we will see.)
Einstein famously characterized the strangeness of quantum mechanics as “spooky action at a distance”, which, if I had to pick one phrase about physics to be my favorite, would be a strong contender. I like to relate this to programming language design: there are some language features which are similarly spooky. Perhaps the most infamous of these is operator overloading.
André Garzia recently wrote a nice blog post called “Lua, a misunderstood language”, and unfortunately (but perhaps unsurprisingly) the bulk of the HN comments on it were about the age-old 0-based vs. 1-based indexing debate. You see, Lua uses 1-based indexing, and lots of programmers claimed this is unnatural because “every other language out there” uses 0-based indexing.
I’ll brush aside quickly the fact that this is not true — 1-based indexing has a long history, all the way from Fortran, COBOL, Pascal, Ada, Smalltalk, etc. — and I’ll grant that the vast majority of popular languages in the industry nowadays are 0-based. So, let’s avoid the popularity contest and address the claim that 0-based indexing is “inherently better”, or worse, “more natural”.
It really shows how conditioned an entire community can be when they find the statement “given a list x, the first item in x is x[1], the second item in x is x[2]” to be unnatural. :) In fact, this is a somewhat scary thought about groupthink, even outside of programming!