Folder: HUMAN PARANORMAL CAPABILITIES; STAR GATE was an umbrella term for the Intelligence Community effort that used remote viewers who claimed to use clairvoyance, precognition, or telepathy to acquire and describe information about targets that were blocked from ordinary perception. The records include documentation of remote viewing sessions, training, internal memoranda, foreign assessments, and program reviews. The STAR GATE program was also called SCANATE, GONDOLA WISH, DRAGOON ABSORB, GRILL FLAME, CENTER LANE, SUN STREAK. Files were released through CREST and obtained as TIF files by the Black Vault and converted to PDF by That 1 Archive.
Governments, policy makers, corporate institutions, and others have failed to respond to decades-long warnings from scientists that CO2 emissions from industrial and domestic activities pose serious risks to human life and human society, to the world’s ecosystems, and perhaps ultimately to much of life on Earth. Those scientists, conservationists, and activists who have understood this have nevertheless failed to effect the change necessary to prevent an ecological and climate emergency. There are complex reasons for these failures, and though it is vitally important that we try to understand them fully, I will not speak to them here.
I want to focus on the urgent question ‘what do we do now?’ by considering the response emerging from the new and quickly growing environmental mobilizations such as Extinction Rebellion, in which people are beginning to resort to techniques of disruption and civil disobedience in the face of governmental and systemic inaction. Are these measures necessary, are they morally justified, and are they perhaps even morally required?
We all know that our cell phones constantly give our location away to our mobile network operators; that’s how they work. A group of researchers has figured out a way to fix that. “Pretty Good Phone Privacy” (PGPP) protects both user identity and user location using the existing cellular networks. It protects users from fake cell phone towers (IMSI-catchers) and surveillance by cell providers.
It’s a clever system. The players are the user, a traditional mobile network operator (MNO) like AT&T or Verizon, and a new mobile virtual network operator (MVNO). MVNOs aren’t new. They’re intermediaries like Cricket and Boost.
MTProto 2.0 is a suite of cryptographic protocols for instant messaging at the core of the popular Telegram messenger application, which is currently used by more than 400 million people. In this paper we analyse MTProto 2.0 using ProVerif, a symbolic cryptographic protocol verifier based on the Dolev-Yao model. In particular, we provide a fully automated proof of the soundness of MTProto 2.0’s authentication, normal chat, end-to-end encrypted chat, and re-keying mechanisms with respect to several security properties, including authentication, integrity, confidentiality and perfect forward secrecy. To prove these results we proceed in a modular way: each protocol is examined in isolation, relying only on the guarantees provided by the previous ones and the robustness of the basic cryptographic primitives. Our research proves the formal correctness of MTProto 2.0, and it can serve as a reference for the implementation and analysis of clients and servers. Moreover, we isolate the aspects of the cryptographic primitives that require further investigation in order to deem this protocol suite definitely secure.
Web browsers are inherently trusted by users. Users are trained to trust websites which “have a padlock in the address bar” and “have the correct name”. This trust leads to users feeling comfortable entering their sensitive data into these websites. From an attacker’s standpoint this trust is an amazing thing: once you have compromised a user’s workstation, there is a process (with close to zero protections) handling a relatively large amount of sensitive data while being used a great deal by the user. Throw in password managers with browser extensions and you have a natural target for red teams. So naturally, when I found myself with some time to spend on a research project, I decided to spend it abusing this trust!
Below you will find a collection of CIA related UFO records. The Black Vault’s connection to the CIA in getting some of these UFO documents released goes back to 1996.
Originally, the CIA would only release about 1,000 pages that had been previously disclosed after a FOIA court case in the 1980s. They never addressed the records that were dated in the years after the case.
The Black Vault spent years fighting for them, and many were released in the late 1990s. However, over time, the CIA made a CD-ROM collection of UFO documents, which encompassed the original records, along with the ones that took years to fight for.
In an effort to make sure The Black Vault stayed up to date, this CD-ROM was purchased in mid-2020 to make this particular data dump available to all users of The Black Vault. You will find it below for download in its original state, along with a converted/searchable .pdf format. (Although the CIA claims this is their “entire” collection, there may be no way to entirely verify that. Research by The Black Vault will continue to see if there are additional documents still uncovered within the CIA’s holdings.)
This is the story of a bug that was discovered and fixed in Telegram’s self-rolled cryptographic protocol about seven years ago. The bug didn’t get any press, and no one seems to know about it, probably because it was only published in Russian.
To this day, it’s the most backdoor-looking bug I’ve ever seen.
Borg Backup is an encrypted, compressed, deduplicated backup program for multiple platforms including Linux. Combined with the NixOS options for configuring Borg Backup, it allows you to back up on a schedule and restore from those backups when you need to.
Borg Backup works with local files and remote servers, and there are even cloud hosts that specialize in hosting your backups. In this post we will cover how to set up a backup job on a server using BorgBase’s free tier to host the backup files.
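As a rough sketch of where we are headed, a NixOS backup job might look something like the following. The repository ID, paths, and passphrase file below are placeholders you would replace with your own values from the BorgBase dashboard:

```nix
{
  services.borgbackup.jobs.homeBackup = {
    # Paths to back up; adjust to taste.
    paths = [ "/home" "/var/lib" ];
    # Placeholder BorgBase repository URL -- copy yours from the dashboard.
    repo = "xxxxxxxx@xxxxxxxx.repo.borgbase.com:repo";
    encryption = {
      mode = "repokey-blake2";
      # Read the passphrase from a root-only file instead of hard-coding it.
      passCommand = "cat /root/borgbackup/passphrase";
    };
    compression = "auto,zstd";
    # Run the job once a day via a systemd timer.
    startAt = "daily";
  };
}
```

The NixOS module turns each job into a systemd service plus timer, which is what gives you scheduled backups for free.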
OpenSSL is one of the most crucial libraries on a Unix system: it performs cryptographic functions and provides the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) protocols to applications.
According to the Arch Linux OpenSSL package, 355 packages, out of the 11523 available, depend on it. You can find it installed on any Unix system (and on Windows too!).
It started in 1998 as a fork of SSLeay and has been in development ever since. Two full-time developers work on it, along with many volunteers.
Fast forward to 2014: a CVE was issued for a high-risk vulnerability found in OpenSSL. It was given the name Heartbleed, because it was found in the TLS/DTLS heartbeat extension (RFC 6520). This vulnerability allowed attackers to steal sensitive data, such as secret keys, user names and passwords.
The whole community turned to the OpenSSL project, weighing its implementation and security policy. Heartbleed was promptly fixed, but there could be new vulnerabilities in the future if security was not properly prioritized during development.
At this point, the OpenBSD folks forked OpenSSL and started a new project: LibreSSL. Its primary goals were to modernize the codebase and improve its security. This new project hasn’t been adopted by big distributions such as Ubuntu and Arch Linux; instead, smaller distributions (at that time), such as Alpine and Void, replaced OpenSSL with LibreSSL in their default configurations.
In recent years, LibreSSL has seen a decline in usage. Alpine switched back to OpenSSL (link to the thread). Many people and distributions are considering doing the same, since OpenSSL has gained the improvements that LibreSSL aimed for, and it is still the de facto standard cryptographic library on Unix.
The NSA has just declassified and released a redacted version of Military Cryptanalytics, Part III, by Lambros D. Callimahos, October 1977.
Parts I and II, by Lambros D. Callimahos and William F. Friedman, were released decades ago — I believe repeatedly, in increasingly unredacted form — and published by the late Wayne Griswold Barker’s Aegean Park Press. I own them in hardcover.
Like Parts I and II, Part III is primarily concerned with pre-computer ciphers. At this point, the document only has historical interest. If there is any lesson for today, it’s that modern cryptanalysis is possible primarily because people make mistakes.
The monograph took a while to become public. The cover page says that the initial FOIA request was made in July 2012: eight and a half years ago.
And there are more books to come.
The first goal of this blog post is to serve as a comprehensive reference on Samsung RKP’s inner workings. It enables anyone to start poking at this obscure code that executes at a high privilege level on their device. Our explanations are often accompanied by snippets of decompiled code that you should feel free to skip over.
The second goal, maybe of interest to more people, is to reveal a now-fixed vulnerability that allows getting code execution at EL2 in Samsung RKP. It is a good example of how a simple mistake can compromise platform security, as the exploit consists of a single call that makes hypervisor memory writable at EL1.
In the first part, we will talk briefly about Samsung’s kernel mitigations (which would probably deserve a blog post of their own). In the second part, we will explain how to get your hands on the RKP binary for your device.
In the third part, we will start taking apart the hypervisor framework that supports RKP on Exynos devices, before digging into the internals of RKP in the fourth part. We will detail how it is started, how it processes the kernel page tables, how it protects sensitive data structures, and finally how it enables the kernel mitigations.
In the fifth and last part, we will reveal the vulnerability, the one-liner exploit and take a look at the patch.
This blog article builds on the fantastic research from the authors of the uncaptcha2 repository. The original scientific unCaptcha paper proposes a method to solve Google’s audio reCAPTCHA with Google’s own Speech-to-Text API.
Yes, you read that correctly: it is possible to solve the audio version of reCAPTCHA v2 with Google’s own Speech-to-Text API.
Even worse: reCAPTCHA v2 is still used in the new reCAPTCHA v3 as a fall-back mechanism.
Static engines have become a standard in automatic detection for large enterprises thanks to their accurate and quick detection. Looking at attacks on enterprise organizations, email attacks make up 91% of all cyber attack attempts. If an attacker can find a way through the organization’s defenses, the organization will become compromised, and the potential damage can run to millions of dollars.
Today, most email file attachments are documents, such as Microsoft Office Word or Excel documents (docx, xlsx, xlsm, etc.). Modern Microsoft Office files appear to have a special structure, but in reality they are just ZIP-compressed archives containing the document’s media, structure, and text.
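To see this for yourself, here is a small sketch (not from the original post) that builds a minimal docx-like archive with Python’s standard zipfile module and then reads it back, just as a detection engine’s ZIP parser would have to; the member names mirror the real Office layout, but the XML contents are placeholders:

```python
import io
import zipfile

# Build a minimal docx-like archive in memory. Real Office files contain
# [Content_Types].xml plus directories such as word/ holding the body text.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("[Content_Types].xml", "<Types/>")
    zf.writestr("word/document.xml", "<w:document>hello</w:document>")

# Any ZIP parser can open it: an Office "document" is just a ZIP container.
with zipfile.ZipFile(buf) as zf:
    members = zf.namelist()
    body = zf.read("word/document.xml").decode()

print(members)  # ['[Content_Types].xml', 'word/document.xml']
```

Renaming a .docx to .zip and extracting it shows the same thing with no code at all.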
Because all of these files are candidates for attack vectors on organizations, detection engines must be able to parse these files correctly to detect exploitation attempts and attacks on enterprises.
In this blog post I will cover ways to exploit security vendors’ static detection engines, focusing on the ZIP file format, the ways it is parsed, how different parsers can fail to detect malicious contents, and the ways attackers can potentially bypass detection engines and infiltrate the organization. Finally, I will provide an in-the-wild example of an attack exploiting such techniques.
While uploading pirated content has always been illegal, the new law is quite specific in that it criminalizes the downloading of unlicensed content. While that could take place in a simultaneous upload environment such as BitTorrent, it seems most likely that people will obtain content from websites instead.
That presents some roadblocks to enforcement, so we asked Ina how, from a technical perspective, the authorities will track, obtain evidence against, and prosecute people who simply download content (comics, movies, music, etc.) to their machines but don’t distribute it.
“The authorities shall use digital forensic technologies to track suspects’ activities and collect evidence. The details of such technologies have not been publicly available,” he explained.
“There are certain special units specialized in cyber crimes in each prefecture. For example, the Tokyo Metropolitan Police has its own Cyber Crime Control Unit. But the police do not investigate unless the person commits the crime repeatedly, intentionally and maliciously, i.e. innocent light downloaders shall not be prosecuted.”
Nmap’s decoy scan is one of my favourite features of the tool: it allows us to specify additional IP addresses that will show up in IDS logs as fake scanning hosts. It is a really effective technique for hindering discovery of the original address that issued the scan. The syntax looks as follows:
nmap -D <host_1>,<host_2>,<host_N>… <target_host>
Each host is separated with a comma and passed after the -D option.
The only drawback of this method is that each decoy host should be up and running to prevent SYN flooding of the target that is being scanned. Additionally, specifying each address by hand isn’t really time-efficient. That being said, we will try to perform the following:
Discover active hosts on current LAN
Specify discovered hosts with Nmap’s -D option in the terminal
Let’s get started :>
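As a rough sketch of those two steps, the following Python script (my own illustration, with hypothetical helper names) ping-scans the LAN, parses nmap’s grepable -oG output for hosts reported “Up”, and reuses them as decoys; the subnet and target address are placeholders you would adjust for your own network:

```python
import re
import subprocess

def parse_up_hosts(grepable_output: str) -> list[str]:
    """Extract addresses of hosts reported 'Up' from `nmap -oG -` output."""
    return re.findall(r"Host: (\S+).*Status: Up", grepable_output)

def decoy_scan_command(decoys: list[str], target: str) -> list[str]:
    """Build the decoy scan command: decoys joined by commas after -D."""
    return ["nmap", "-D", ",".join(decoys), target]

if __name__ == "__main__":
    # Step 1: discover active hosts on the current LAN (placeholder subnet).
    out = subprocess.run(
        ["nmap", "-sn", "192.168.1.0/24", "-oG", "-"],
        capture_output=True, text=True, check=True,
    ).stdout
    live = parse_up_hosts(out)
    # Step 2: pass the live hosts as decoys against a placeholder target.
    print(" ".join(decoy_scan_command(live, "192.168.1.50")))
```

Using hosts that are actually up avoids the SYN-flood caveat mentioned above, since the target’s replies to the decoys go to real machines.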
In early December, we discovered a new, undetected worm written in Golang. This worm continues the popular 2020 trend of multi-platform malware developed in Golang.
The worm attempts to spread across the network in order to run XMRig Miner on a large scale. The malware targets both Windows and Linux servers and can easily maneuver from one platform to the other. It targets public-facing services that have weak passwords: MySQL, the Tomcat admin panel, and Jenkins. In an older version, the worm also attempted to exploit a recently disclosed WebLogic vulnerability: CVE-2020-14882.
During our analysis, the attacker kept updating the worm on the Command and Control (C&C) server, indicating that it is active and might target additional weakly configured services in future updates.