This post introduces how to start reverse engineering UEFI-based BIOS modules. Taking Absolute as an example, it serves as a tutorial on BIOS module reverse engineering, with free tools and approachable steps for beginners.
This post is not intended to explain how to disable Absolute or to discover issues in it.
The general method of a browser render-process exploit is: after exploiting the vulnerability to obtain a user-mode arbitrary memory read/write primitive, the vtable of a DOM/JS object is tampered with to hijack the code execution flow. A ROP chain then calls VirtualProtect to change the shellcode memory to PAGE_EXECUTE_READWRITE, and finally the ROP chain jumps the code execution flow to the shellcode. With Windows 8.1, Microsoft introduced the CFG (Control Flow Guard) mitigation to verify indirect function calls, which mitigates exploits that tamper with a vtable to gain code execution.
However, the confrontation has not ended; new methods to bypass the CFG mitigation have emerged. For example, in chakra/jscript9, the code execution flow is hijacked by tampering with the function return address on the stack; in v8, WebAssembly memory with the executable property is used to execute shellcode. In December 2020, Microsoft introduced the CET (Control-flow Enforcement Technology) mitigation, based on Intel Tiger Lake CPUs, in Windows 10 20H1, which protects against tampering with the function return address on the stack. Therefore, how to bypass CFG in a CET-mitigated environment has become a new problem for vulnerability exploitation.
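CFG's core idea can be modeled in a few lines: before every indirect call, the program checks the target against a set of call targets that the compiler recorded as valid. The following is a language-agnostic sketch of that check, not Windows internals; the function names and the `valid_targets` set are illustrative.

```python
# Toy model of Control Flow Guard: indirect calls may only land on
# addresses recorded as valid call targets at compile time.

def legitimate_handler():
    return "handled"

def attacker_gadget():
    return "shellcode"

# In a real binary the compiler/loader populates this; here we register
# only the functions that are legal indirect-call targets.
valid_targets = {legitimate_handler}

def guarded_indirect_call(func):
    """Mimics the guard check inserted before each indirect call."""
    if func not in valid_targets:
        raise RuntimeError("CFG violation: invalid indirect call target")
    return func()

vtable = {"onclick": legitimate_handler}
assert guarded_indirect_call(vtable["onclick"]) == "handled"

# An exploit that overwrites the vtable entry now trips the guard
# instead of redirecting execution.
vtable["onclick"] = attacker_gadget
try:
    guarded_indirect_call(vtable["onclick"])
except RuntimeError as err:
    print(err)
```

This also shows why return-address tampering sidesteps CFG: the guard only inspects indirect call targets, not where a `ret` goes, which is the gap CET's shadow stack closes.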
Abuse of collaboration applications is not a new phenomenon and dates back to the early days of the internet. As new platforms and applications gain in popularity, attackers often develop ways to use them to achieve their mission objectives. Communications platforms like Telegram, Signal, WhatsApp and others have been abused over the past several years to spread malware, serve as command-and-control channels, and otherwise further nefarious purposes.
CVE-2021-26708 is assigned to five race condition bugs in the virtual socket implementation of the Linux kernel. I discovered and fixed them in January 2021. In this article I describe how to exploit them for local privilege escalation on Fedora 33 Server for x86_64, bypassing SMEP and SMAP. Today I gave a talk at Zer0Con 2021 on this topic (slides).
I like this exploit. The race condition can be leveraged for very limited memory corruption, which I gradually turn into arbitrary read/write of kernel memory, and ultimately full power over the system. That’s why I titled this article “Four Bytes of Power.”
BleedingTooth is a set of zero-click vulnerabilities in the Linux Bluetooth subsystem that can allow an unauthenticated remote attacker at short range to execute arbitrary code with kernel privileges on vulnerable devices.
I found and reported this vulnerability with @ginkoid.
This was actually the first report that paid out for me on HackerOne. At $35,000, it’s also the highest bounty I’ve received so far from HackerOne (and I believe the highest GitHub has paid out to date).
A lot of bugs seem to be a mix of both luck and intuition. In this blog post, I’ll illustrate my thought processes in approaching such a target.
In this article we will discuss an interesting way to fingerprint Tor relays using JARM, and build a small application to do so.
Once upon a time, a government auditor insisted to me that keystroke loggers had to run as root, otherwise they would not function properly. So, I wrote a keystroke logger that ran as a normal user and showed it to him.
He wasn’t amused. He said that I was violating government IT policy by demonstrating the program to him.
Some time later, another auditor was adamant that I would not be able to copy files from his secure enclave computers onto the Internet. He said that he had strong network security measures in place. So, I wrote another small program to copy files from his enclave computers onto the Internet.
He wasn’t amused either, but was far more appreciative when I showed him how it worked.
I’ve previously blogged about the possible backdoor threat to curl. This post may repeat a little, but it is also a refreshed and renewed take on the subject several years later, in the shadow of the recent PHP backdoor commits of March 28, 2021. Nowadays, “supply chain attacks” are a hot topic.
Since you didn’t read that PHP link: an unknown project outsider managed to push a commit into the PHP master source code repository with a change (made to look as if done by two project regulars) that obviously inserted a backdoor that could execute custom code when a client tickled a modified server the right way.
Passwords are a terrible technology for logging in to websites, and I hope that one day they can be done away with. For now, though, the best tactic is to generate random passwords and use a password manager.
A password manager uses one master password, which you remember, to decrypt or release the individual passwords for each service you’re signed up to. These individual “passwords” can be long, randomly generated strings.
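As a quick sketch of the “long, randomly generated strings” a manager stores for you, Python’s `secrets` module is the right tool for the job; the length and alphabet below are arbitrary choices for illustration, not a recommendation from any particular password manager.

```python
import secrets
import string

def generate_password(length: int = 24) -> str:
    """Return a random password drawn from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run
```

The key point is using `secrets` (a cryptographically secure source) rather than `random`, whose output is predictable.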
I think it is a mistake to use a consumer router. The big reason is that their security is not acceptable.
I say this fully aware that my opinion runs counter to every article you will ever read about buying a router. Consumer routers are marketed, and reviewed in the tech press, based on speed, features, speed, price, speed, appearance and speed. Security never factors into the equation. These are, to me, the wrong priorities.
Three years ago, Spectre changed the way we think about security boundaries on the web. It quickly became clear that flaws in modern processors undermined the guarantees that web browsers could make about preventing data leaks between applications. As a result, web browser vendors have been continuously collaborating on approaches intended to harden the platform at scale. Nevertheless, this class of attacks still remains a concern and requires web developers to deploy application-level mitigations.
By sharing our findings with the security community, we aim to give web application owners a better understanding of the impact Spectre vulnerabilities can have on the security of their users’ data. Finally, this post describes the protections available to web authors and best practices for enabling them in web applications, based on our experience across Google.
I’m writing this post because I often hear that kernel exploitation is intimidating or difficult to learn. As a result, I’ve decided to start a series of basic bugs and exercises to get you started!
A couple of weeks ago, I read a post about how the “sealed system” on Big Sur was hurting people. I kind of skimmed through it and figured it was mostly complaining about the size of the download. For whatever reason, that hadn’t been a problem for me and my machines, so I kind of wrote it off.
Microarchitectural attacks on computing systems often stem from simple artefacts in the underlying architecture. In this paper, we focus on the Return Address Stack (RAS), a small hardware stack present in modern processors to reduce the branch miss penalty by storing the return addresses of each function call. The RAS is useful for handling specifically the branch predictions for RET instructions, which are not accurately predicted by the typical branch prediction units. In particular, we envisage a spy process who crafts an overflow condition in the RAS by filling it with arbitrary return addresses, and wrestles with a concurrent process to establish a timing side channel between them. We call this attack principle RASSLE (Return Address Stack based Side-channel Leakage), which an adversary can launch on modern processors by first reverse engineering the RAS using a generic methodology exploiting the established timing channel.
There is always a certain amount of tension between the goals of those using older, less-popular architectures and the goals of projects targeting more mainstream users and systems. In many ways, our community has been spoiled by the number of architectures supported by GCC, but a lot of new software is not being written in C—and existing software is migrating away from it. The Rust language is often the choice these days for both new and existing code bases, but it is built with LLVM, which supports fewer architectures than GCC supports—and Linux runs on. So the question that arises is how much these older, non-Rusty architectures should be able to hold back future development; the answer, in several places now, has been “not much”.
The latest issue came up on the Gentoo development mailing list; Michał Górny noted that the Python cryptography library has started replacing some of its C code with Rust, which is now required to build the library. Since the Gentoo Portage package manager indirectly depends on cryptography, “we will probably have to entirely drop support for architectures that are not supported by Rust”. He listed five architectures that are not supported by upstream Rust (alpha, hppa, ia64, m68k, and s390) and an additional five that are supported but do not have Gentoo Rust packages (mips, 32-bit ppc, sparc, s390x, and riscv).
After watching “The Imitation Game”, the post-World War II US government realised that cryptography was a big military deal. They immediately and ham-fistedly regulated the export of cryptographic technology on grounds of national security.
SSL, the encryption protocol that protects all HTTPS connections everywhere on the internet, was developed in the 1990s at Netscape. Regulation still required that RSA keys used to encrypt traffic travelling out of the US be limited to 512 bits, the intent of which was to ensure that the government could still tractably spy on foreign communication at will. SSL was designed in such a way that it could support multiple different ways of generating and authenticating a “shared secret” (more on this later), one of which was the deliberately hamstrung, 512-bit “Export RSA”.
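To make the “shared secret” mechanism concrete, here is textbook RSA with toy primes, purely to illustrate the exchange that Export RSA performed with a deliberately weak 512-bit modulus; real keys use primes hundreds of digits long, and the numbers below are for demonstration only.

```python
# Textbook RSA key setup with toy primes (never use numbers this small).
p, q = 61, 53
n = p * q                       # public modulus (toy: 3233)
e = 17                          # public exponent
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)             # private exponent (modular inverse of e)

secret = 42                     # the pre-master "shared secret"
ciphertext = pow(secret, e, n)  # client encrypts with the server's public key
recovered = pow(ciphertext, d, n)  # server decrypts with its private key
assert recovered == secret
```

The security of the scheme rests entirely on the difficulty of factoring `n` back into `p` and `q`; at 512 bits, that factoring job was the tractable spying avenue the export rules deliberately preserved.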
The privacy threats of online tracking have garnered considerable attention in recent years from researchers and practitioners alike. This has resulted in users becoming more privacy-cautious and browser vendors gradually adopting countermeasures to mitigate certain forms of cookie-based and cookie-less tracking. Nonetheless, the complexity and feature-rich nature of modern browsers often lead to the deployment of seemingly innocuous functionality that can be readily abused by adversaries. In this paper we introduce a novel tracking mechanism that misuses a simple yet ubiquitous browser feature: favicons. In more detail, a website can track users across browsing sessions by storing a tracking identifier as a set of entries in the browser’s dedicated favicon cache, where each entry corresponds to a specific subdomain. In subsequent user visits the website can reconstruct the identifier by observing which favicons are requested by the browser while the user is automatically and rapidly redirected through a series of subdomains. More importantly, the caching of favicons in modern browsers exhibits several unique characteristics that render this tracking vector particularly powerful, as it is persistent (not affected by users clearing their browser data), non-destructive (reconstructing the identifier in subsequent visits does not alter the existing combination of cached entries), and even crosses the isolation of the incognito mode. We experimentally evaluate several aspects of our attack, and present a series of optimization techniques that render our attack practical. We find that combining our favicon-based tracking technique with immutable browser-fingerprinting attributes that do not change over time allows a website to reconstruct a 32-bit tracking identifier in 2 seconds. Furthermore, our attack works in all major browsers that use a favicon cache, including Chrome and Safari.
Due to the severity of our attack we propose changes to browsers’ favicon caching behavior that can prevent this form of tracking, and have disclosed our findings to browser vendors who are currently exploring appropriate mitigation strategies.
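The encoding scheme the abstract describes can be sketched in a few lines: each of 32 subdomains represents one bit of the identifier, and whether the browser re-requests a subdomain’s favicon (cache miss, bit 0) or not (cache hit, bit 1) reveals that bit on a later visit. The subdomain naming below is hypothetical, and the cache is modeled as a plain set.

```python
NUM_BITS = 32

def subdomains_to_cache(tracking_id: int) -> set:
    """Write phase: serve a favicon only on subdomains whose bit is 1,
    so exactly those entries end up in the browser's favicon cache."""
    return {f"f{i}.tracker.example" for i in range(NUM_BITS)
            if (tracking_id >> i) & 1}

def reconstruct_id(cached: set) -> int:
    """Read phase: redirect through all subdomains; one whose favicon is
    NOT re-requested must already be cached, i.e. its bit is 1."""
    tracking_id = 0
    for i in range(NUM_BITS):
        if f"f{i}.tracker.example" in cached:
            tracking_id |= 1 << i
    return tracking_id

uid = 0xDEADBEEF
assert reconstruct_id(subdomains_to_cache(uid)) == uid
```

This also makes the “non-destructive” property visible: the read phase only observes cache hits and misses, leaving the stored combination of entries unchanged.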
In this article we will explore ‘Apport’, the Ubuntu crash handler. When an application crashes, Apport is executed by the kernel; it reads information about the crashed process and then creates a crash report that can be sent to Ubuntu developers.
We will show how we were able to bypass several defense mechanisms, manipulate the crash handler, and get local privilege escalation.
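The kernel hands crash metadata to handlers like Apport through the `core_pattern` mechanism: `/proc/sys/kernel/core_pattern` holds a template such as `|/usr/share/apport/apport %p %s %c`, and the kernel substitutes the specifiers before piping the core dump to the handler. Below is a minimal sketch of such a pipe handler, not Apport’s actual code; the field names mirror the kernel’s specifiers, but the report format is made up.

```python
import sys

# core_pattern specifiers -> human-readable field names.
# %p = PID of the crashed process, %s = signal number, %c = core ulimit.
SPECIFIERS = {"%p": "pid", "%s": "signal", "%c": "core_ulimit"}

def build_report(argv):
    """Map the positional args (already substituted by the kernel)
    onto named fields for a crash report."""
    return dict(zip(SPECIFIERS.values(), argv))

if __name__ == "__main__":
    report = build_report(sys.argv[1:])
    # A real handler would also read the core dump from stdin and
    # inspect /proc/<pid>/ before the kernel tears the process down.
    print(report)
```

That last point is where much of the attack surface lives: the handler runs with elevated privileges on behalf of an arbitrary crashing process, so any confusion about which process it is inspecting can be leveraged.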
We must first begin with a brief introduction to the hardware platform. Skip this if you have read the awesome material available on the web about the Intel architecture; I’ll try to briefly summarize it here.
The Intel platform is based on one or two chips. Small systems have one; desktop and server systems are split into a CPU complex and a PCH complex (PCH = Platform Controller Hub).
In this post, we disclose a severe vulnerability in Windows Defender that allows attackers to escalate privileges from a non-administrator user. Windows Defender is deeply integrated into the Windows operating system and is installed by default on every Windows machine (more than 1 billion devices).
Privileged services on Windows or in Windows components may contain bugs that enable malicious escalation of privileges. Attackers often use such vulnerabilities to carry out sophisticated attacks. Security products ensure device security and are supposed to prevent such attacks from happening, but what if the security product itself introduces a vulnerability? Who’s protecting the protectors?
Microsoft patched the vulnerability in Windows Defender and released a fix on February 9th. Prior to the fix, the vulnerability had remained undiscovered for 12 years, probably due to the nature of how this specific mechanism is activated.
SentinelOne isn’t aware of any indication that this flaw has been exploited in the wild; nevertheless, all SentinelOne customers are protected from this vulnerability.
It’s not every day that I get to tell a story about a type of technology that was recently in the news for macabre reasons, but then again, it’s not often that water supplies get hacked using screen sharing technology, which is something that literally happened last week in Florida. At the center of the debate is a tool that is perhaps at the peak of its cultural importance, but that naturally comes with inherent security concerns: The remote viewer application, which is incredibly useful when, y’know, people are remote, as is quite common right now. But it comes with complications, as seen in a certain community’s water treatment system. Today’s Tedium discusses the history of remote viewers, which go back further than you might guess.