If you’re visualizing it, almost every textbook sentence provides you with an opportunity to create a new image in your mind. As you progress further through the textbook, it will call back to more and more earlier concepts. In biochemistry, it’s things like the relationship between Gibbs free energy, enthalpy, entropy, and electrostatic potential; the amino acids; the nucleotides; different types of lipids; and a variety of major enzymes (e.g. DNA polymerase) and pathways (e.g. glycolysis). If you can figure out what those concepts are, and memorize them, you’ll be able to picture them when it mentions them casually in passing. If you can’t remember glutamine’s abbreviation or chemical structure, then every time the book mentions Q (or is it E?), you’ll miss out on an opportunity to practice recalling it, or else you’ll have to interrupt your flow to look it up for the umpteenth time. This is a role for flashcards and super-convenient reference charts. Some knowledge is most helpful if you can access it in five seconds or less.
I’m hoping that this blog post could be helpful to someone who has decided to work in the area of symmetric key crypto. My symmetric-key cryptanalysis journey in grad school involved a fair amount of zig-zagging. Here, I’ll try to distill the key steps that enabled me to be productive in this area.
By symmetric-key cryptanalysis, I mean attacks on schemes such as block ciphers, stream ciphers, hash functions, and AEAD schemes. An attack is a violation of the security promises made by the primitive, with time or space complexity lower than the primitive’s authors projected, regardless of whether the attack is practical. An impractical attack cannot be fully verified, and it has indeed happened that whole classes of peer-reviewed cryptanalysis papers turned out not to work at all, which is, well, another topic.
These are the tape recordings of Richard Feynman’s 1961-64 Caltech Introductory Physics lectures, which form the basis of the book The Feynman Lectures on Physics. The original recordings were made on 1⁄4” reel-to-reel tapes, now preserved in Caltech’s Archive. In 2010 the entire collection was digitized by media preservationist George Blood, at a sampling rate of 96 kHz with 24-bit samples, PCM-encoded in tiff files about 2 GB each in size. For this online publication we are serving more compact versions, downsampled to 48 kHz with 16-bit samples, reencoded as AAC-HE (mp4) and Opus (ogg) at a data rate of 48 kbps.
We present entire lecture tapes without any editing or enhancement, including the tape leader. Parts of some lectures edited out of the commercial versions of these recordings are preserved here intact. Recorded material outside the lectures, including discussions between Feynman and his students and/or colleagues, never previously published, can be found in this publication. Three entire lecture recordings never heard before outside Caltech, including two lectures on Quantum Mechanics Feynman gave in 1964, are also included in this publication.1
Cryptocurrency hype is at its peak, and blockchains are on everyone’s lips. Since I started writing this, Bitcoin & assorted cryptocurrencies hit multiple all-time highs, then crashed 50%.
Amid all the sound and fury, I think I have something interesting and nuanced to say about blockchains and cryptocurrencies.
I also believe that to understand the topic, one has to understand the basics of the technology more precisely than most articles allow for.
Contrary to popular belief, blockchain technology is not that complicated. I dare say it’s even quite simple.
So, in this article I explain the technology. I will tell you what it does, but not what the practical applications are, nor whether I think the technology has a future, whether it’s a scam, or whether cryptocurrency prices are justified. I will, however, tackle all these topics in a follow-up article.
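To give a taste of that simplicity, here is a minimal sketch of the core data structure: a chain of blocks in which each block commits to the hash of its predecessor. The data and function names are invented for illustration; real blockchains add consensus, signatures, and much more.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's contents deterministically.
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def make_block(data, prev_hash):
    # Each block records its data and the hash of the previous block.
    return {"data": data, "prev_hash": prev_hash}

# Build a tiny chain: each block commits to everything before it.
genesis = make_block("genesis", "0" * 64)
second = make_block("Alice pays Bob 5", block_hash(genesis))
third = make_block("Bob pays Carol 2", block_hash(second))

# Tampering with an earlier block breaks every later link.
assert second["prev_hash"] == block_hash(genesis)
genesis["data"] = "genesis (tampered)"
assert second["prev_hash"] != block_hash(genesis)
```

This hash-linking is what makes the history of a chain tamper-evident: changing any old block changes its hash, which invalidates every block after it.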
Fallacies are fake or deceptive arguments, “junk cognition,” that is, arguments that seem irrefutable but prove nothing. Fallacies often seem superficially sound and they far too often retain immense persuasive power even after being clearly exposed as false. Like epidemics, fallacies sometimes “burn through” entire populations, often with the most tragic results, before their power is diminished or lost. Fallacies are not always deliberate, but a good scholar’s purpose is always to identify and unmask fallacies in arguments.
One of the most interesting and useful things computers can do for us is cryptography. We can hide messages, validate identities, and even build entire trustless distributed systems. Cryptography not only defines our modern world, but is a big part of how we will build the world of the future.
However, unless you want to dedicate years and a PhD to studying the subject, the actual workings of cryptography can be hard to learn. The field is full of pitfalls, and if you dare to build from scratch, you are bound to make a fool of yourself. Why?
In my opinion, it comes down to history. Cryptography has had centuries of methods that have been made, broken, and remade again. Most tutorials on cryptography focus on the what: do this, don’t do that, follow the rules. But they skip over the why: why do we do the things we do? What are we trying to avoid?
To understand the why, we need to understand how we got here in the first place. And to do that, let’s set computers to the side for the moment and delve into the world of classical cryptography.
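As a first taste of classical cryptography, here is a sketch of the Caesar cipher, one of the oldest methods: shift every letter by a fixed amount. (Function names are mine; this cipher is trivially breakable, which is exactly the kind of lesson classical cryptography teaches.)

```python
def caesar_encrypt(plaintext, shift):
    # Shift each letter by a fixed amount, wrapping around the alphabet.
    result = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation alone
    return "".join(result)

def caesar_decrypt(ciphertext, shift):
    # Decryption is just encryption with the opposite shift.
    return caesar_encrypt(ciphertext, -shift)

print(caesar_encrypt("attack at dawn", 3))  # -> "dwwdfn dw gdzq"
```

With only 25 possible shifts, an attacker can simply try them all, and that observation already hints at the "why" behind modern rules like using large key spaces.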
Knowledge Graphs (KGs) have emerged as a compelling abstraction for organizing the world’s structured knowledge, and as a way to integrate information extracted from multiple data sources. Knowledge graphs have started to play a central role in representing the information extracted using natural language processing and computer vision. Domain knowledge expressed in KGs is being input into machine learning models to produce better predictions. Our goals in this blog post are to (a) explain the basic terminology, concepts, and usage of KGs, (b) highlight recent applications of KGs that have led to a surge in their popularity, and (c) situate KGs in the overall landscape of AI. This blog post is a good starting point before reading a more extensive survey or following research seminars on this topic.
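A common way to represent a KG is as a set of subject-predicate-object triples, where each triple is one edge of the graph. A minimal sketch, with entities and predicates invented for illustration:

```python
# A knowledge graph as a set of (subject, predicate, object) triples.
triples = {
    ("Marie Curie", "born_in", "Warsaw"),
    ("Marie Curie", "field", "Physics"),
    ("Warsaw", "capital_of", "Poland"),
}

def objects(graph, subject, predicate):
    # Look up all objects linked to a subject by a given predicate.
    return {o for s, p, o in graph if s == subject and p == predicate}

print(objects(triples, "Marie Curie", "born_in"))  # -> {'Warsaw'}
```

Real systems store such triples in dedicated graph databases and query them with languages like SPARQL, but the underlying data model is this simple.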
Last month, we published a story in collaboration with the NPR podcast Rough Translation about nonnative speakers navigating the world of “good” and “bad” English. Dozens of readers wrote in with their own stories about how challenging — and frustrating and rewarding — it can be to learn and teach English.
We’re featuring three responses that we found especially insightful: an English professor from India shares an English word she’s used for years — not found anywhere in the dictionary; an author points out the politics behind terms like “native language” and “mother tongue”; and an engineering professor discusses why stereotypes about “accented English” are totally hypocritical.
While first getting my feet wet with concurrency, building a linear algebra library seemed like a pretty simple project to start with. Many operations in linear algebra, such as addition and multiplication, lend themselves well to parallelization because their sub-computations are independent of one another. For instance, adding two matrices A and B simply means adding each element of A to the element at the corresponding position in B. Because each of these additions is independent, the operation is fairly straightforward to parallelize. Similar parallelizable chunks exist when multiplying two matrices. Given two matrices A and B, both of dimensions m×n, we can determine their product W as AᵀB. The challenge in this case, however, is not the multiplication of the two matrices, but rather the transpose operation that turns A into Aᵀ. Confused? Let me explain!
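The row-by-row independence of matrix addition can be sketched like this; it is a toy illustration using a thread pool, whereas a real library would use native threads or vectorized kernels:

```python
from concurrent.futures import ThreadPoolExecutor

def add_rows(a_row, b_row):
    # Each row sum is independent of every other row.
    return [x + y for x, y in zip(a_row, b_row)]

def parallel_add(A, B, workers=4):
    # Dispatch one task per row; no locks are needed because no
    # element is written by more than one task.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(add_rows, A, B))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(parallel_add(A, B))  # -> [[6, 8], [10, 12]]
```

Splitting the work by rows rather than by single elements keeps each task large enough to be worth the scheduling overhead.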
WHEN CONSIDERING which foreign languages to study, some people shy away from those that use a different alphabet. Those random-looking squiggles seem to symbolise the impenetrability of the language, the difficulty of the task ahead.
So it can be surprising to hear devotees of Russian say the alphabet is the easiest part of the job. The Cyrillic script, like the Roman one, has its origins in the Greek alphabet. As a result, some letters look the same and are used nearly identically. Others look the same but have different pronunciations, like the Cyrillic Р, which stands for an r-sound. For Russian, that cuts the task down to only about 20 entirely new characters. These can comfortably be learned in a week, and soon mastered to the point that they present little trouble.
Being one of the oldest services on the Internet, email has been with us for decades and will remain with us for at least another decade. Even though email plays an important role in everyday life, most people know very little about how it works. Before we roll up our sleeves and change this, here are a few things that you should know:
This article covers all aspects of modern email. As a result, it became really long. While later chapters do build on earlier ones, you can start reading wherever you want and fill your knowledge gaps as you go.
This article is structured as follows: after clarifying some user-facing concepts, we’ll look at the technical architecture of email and the roles of the various entities. We’ll then study the protocols these entities use to communicate with one another and the format of the transmitted messages. Once we understand how email works, we can discuss its privacy and security issues and examine how some of those security issues are fixed by more recent standards.
Among many other things, you will learn in this article why mail clients use outgoing mail servers, why SMTP is used for the submission and the relay of messages, how mail loops are prevented, and how you should configure your custom domains.
Even if you’re not interested in email, this article can teach you a lot about Internet protocols and IT security. For example, it covers Implicit and Explicit TLS; password-based authentication mechanisms with hash functions, replay attacks, encryption mechanisms, and channel bindings; internationalized domain names with Punycode encoding, Unicode normalization, case folding, and homograph attacks; transport security with DANE and HSTS; and end-to-end security with S/MIME and PGP.
If you haven’t done so already, read the article about the Internet first. This article assumes that you’re familiar with the following acronyms and the concepts behind them: RFC, IP, TCP, TLS, DNS, and DNSSEC.
This article contains 29 tools. To make it easier to play around with them, I’ve published them on a separate page as well.
This article focuses on how modern email works, not on how you set up your own email infrastructure. If you want to do that, Mail-in-a-Box seems like a good place to start.
In most modern Calculus courses, the history behind useful mathematical results is often ignored. Though the pragmatic uses for Calculus are numerous, without a fundamental understanding of the origins of its methods, the student is left applying memorized techniques, often lacking an understanding of why those techniques work. It is our intent to explore the historical path, in significant mathematical detail, to the elementary methods of the Calculus.
In a nutshell, what you are reading is intended to be a shop class for computer science. Young computer science students are taught to “drive” the computer, but where do you go to learn what is under the hood? Trying to understand the operating system is unfortunately not as easy as just opening the bonnet. The current Linux kernel runs into the millions of lines of code; add to that the other critical parts of a modern operating system (the compiler, assembler, and system libraries), and the code base becomes unimaginable. Further still, add a University-level operating systems course (or four), some good reference manuals, and two or three years of C experience, and, just maybe, you might be able to figure out where to start looking to make sense of it all.
Nearly six years ago, I sat down at my desk and typed up a detailed guide for anyone who wanted to learn physics on their own. At the time, I had no idea how many people would read it and use it — my only goal was to put the information out there in a clear and straightforward way so that anyone who wanted to learn physics would have the self-study curriculum they needed. Since then, over six hundred thousand people have turned to this guide to study physics.
According to the emails I’ve received from readers, many of you have gone on to get undergraduate degrees in physics after following the curriculum in this guide (some of you are even now in graduate programs!), but the majority of those who have bookmarked and followed this guide — even all the way to the end! — have done so out of pure curiosity and for the sheer joy of understanding the incredible universe we inhabit.
The success of this guide is, I believe, a testament to two things.
There are many useful tools in Linux and macOS (… and *BSD and others) that can only be called from the command line. We’ve all called git, cat, curl, less, open, grep/rg and others to do things that GUI programs can’t. Even if the GUI version can do it, chances are the command line version does it quicker. These can also be composed together to create simple new programs.
MIDI is a simple binary protocol for communicating with synthesizers and other electronic music equipment.
It was developed in 1981 by Dave Smith and Chet Wood of Sequential Circuits. MIDI was quickly embraced by all the major synth manufacturers and led to developments such as microcomputer sequencers, and with them the electronic home studio. Although many attempts have been made to replace it, it is still the industry standard.
MIDI was designed for the 8-bit microcontrollers found in synthesizers at the beginning of the 1980s. As such, it is a very minimal byte-oriented protocol. The message for turning a note on, for example, is only three bytes long.
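A minimal sketch of how such a note-on message can be constructed, assuming channel 1, middle C (note number 60), and velocity 64; the specific channel and velocity values are chosen purely for illustration, following the MIDI 1.0 message layout:

```python
NOTE_ON = 0x90   # status byte: note-on, channel 1 (low nibble = channel)
MIDDLE_C = 0x3C  # note number 60
VELOCITY = 0x40  # velocity 64, a moderate key strike

message = bytes([NOTE_ON, MIDDLE_C, VELOCITY])
print(message.hex(" "))  # -> "90 3c 40"
```

The matching note-off is equally compact: the status nibble changes from 9 to 8 (0x80 for channel 1), followed by the same note number and a release velocity.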