The text below addresses the "leaked" OpenAI "QUALIA" document published online on 2023-11-23. The document quickly sparked discussion across social media, forums, image-boards, and news sites, and multiple YouTube channels covered and analyzed its content and implications. Background, images, and video links may be found on the Main Page.


Re: Q-451-921 - QUALIA

Hi.
I’m an individual with experience in niche areas.
As many observers have hoped, the QUALIA letter did not originate from OpenAI;
I created it and released it online.

As proof of this:
The project code Q-451-921 was derived from "DESU" by simple substitution: 4-5-19-21.
Desu is a meme and an LLM chatbot, who keeps Roko’s Basilisk as a pet.
DES is also relevant, as it preceded AES. Notably, both 451 and 921 are semiprimes.
The MD5 preimage attack was given a complexity of 2^42, referencing the "bad news engine" from HHGTTG: "nothing travels faster than the speed of light, with the possible exception of bad news, which obeys its own special laws".
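As a worked check of the derivation above (a minimal Python sketch; the split of "451921" into 451-921 follows the project code's own grouping):

```python
# Map each letter to its alphabet position (A=1 ... Z=26).
def letter_positions(word):
    return [ord(c) - ord("A") + 1 for c in word.upper()]

digits = "".join(str(n) for n in letter_positions("DESU"))  # "451921"
print(f"Q-{digits[:3]}-{digits[3:]}")  # Q-451-921

# Semiprime check: exactly two prime factors, counting multiplicity.
def prime_factors(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

for n in (451, 921):
    f = prime_factors(n)
    tag = "semiprime" if len(f) == 2 else "not semiprime"
    print(n, "=", " * ".join(map(str, f)), "->", tag)
# 451 = 11 * 41 -> semiprime
# 921 = 3 * 307 -> semiprime
```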

1 ) The main objective of the letter was to push OpenAI to inform the public about the discovery that could "threaten humanity", as reported by Reuters. Given OpenAI's recent statements, specifically regarding safety and the unfortunate Q* leak, it may have been successful - but this cannot be verified. The intent was to lower the threshold for them to reveal details of the dangers to the public, by allowing them to disclose something less alarming than what the letter stated.

The letter was not created for the purpose of "trolling" in any way; the topic is too serious for that. The public deserved to know to what extent they were kept in the dark about something that may threaten them. Given this, the amount of misleading information that could be justified was carefully considered - but it is regrettable nonetheless. Contrary to speculation, the document was not, to any extent, created by ChatGPT or other models.

2 ) The second objective was to demonstrate the unavoidable consequences of excessive secrecy and a black box approach to public relations management. The rapid spread of the document would not have been possible if OpenAI’s secrecy had not created an information vacuum, allowing certain other actors to fill the void - all it required was the right prompt.

OpenAI most likely prefers that the public focus on unfounded rumors - rather than on Altman soliciting investors for his Tigris TPU venture; their recent partnership with the UAE-based Group 42 (with a very reputable chairman and “cut” Chinese ties); new board member Lawrence Summers’ multiple connections to Jeffrey Epstein; the New York Times article on OpenAI infighting; or Altman’s personal investment in Rain Neuromorphics.

There is very little that is open about OpenAI;
they’re sometimes referred to as ClosedAI, by me and others.
Given the immense capability of their models, the public needs - and deserves - greater insight into their operations (and others’). Consider this: if a scenario similar to the "leaked" letter played out behind closed doors, you would most likely never have been told.

3 ) The third objective of the document was to shed light on the dangers of unregulated AI:
A scenario similar to, or different from, what the letter portrays - but inconceivably dangerous - will play out in the near future, given the current pace of our advancements.
The technologies that will bring this about should not, despite their stunning brilliance, be in the hands of a select few, given the incompetence and mismanagement exhibited by several key actors.

This secrecy will likely not change, however, given that becoming an AI superpower is also of significant national interest. The speed at which Generative AI can create realistic, high-quality text, images, human-sounding voices, and soon even video is rapidly outpacing our capability to detect and analyze it in human time.
We have already created technology with capabilities that massively exceed that of a human in many different areas:
Bombe. Colossus. Chinook. Deep Blue. Othello. Aimbots. IBM Watson.
AlphaGo. OpenAI Five. AlphaFold. CLIP. GPT. LLaMA. DALL-E.

The trend towards generalization is obvious and accelerating. Perhaps the first goal of an AGI should be to figure out how we can utilize its powers in a way that does not inadvertently bring about the end of us. But this would require it to create a safeguard that a smarter version of itself cannot bypass, a challenge shared by humans - but in a different way.

Ironically, we may not be able to rely on a deus ex machina to solve this problem for us.
Fully open-source AI development is most likely a necessity to preserve life.
Even though AI may not pose an existential risk as of now, it will.

"For progress there is no cure.
The only safety possible is relative,
and it lies in an intelligent exercise of day-to-day judgment."

- John von Neumann, "Can We Survive Technology?", 1955

~ PGP: [e31457940d8691ad]; rentry.org/Q451921


Addendum for the Technically Inclined

The document was formulated to encourage discussion and sharing, but also not to alarm any experts in the fields - to avoid affecting decision-making by agencies, corporations, and regulators. This included omitting details regarding relevant variables such as Rijndael’s block cipher modes and the possible reuse of keys and initialization vectors, and not using the slightly less improbable key size of 128 bits. The model suggested obfuscating its architecture after inducing some Optimal Brain Damage on a copy of itself, without subsequent retraining.
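For the technically inclined reader, a minimal sketch of what a single Optimal Brain Damage step (LeCun et al., 1990) looks like, assuming a diagonal Hessian estimate is available; the letter never specifies a model or framework, so the arrays below are purely illustrative:

```python
import numpy as np

def obd_prune(weights, hessian_diag, prune_frac=0.2):
    # OBD saliency: s_i = 0.5 * H_ii * w_i^2, the estimated
    # increase in loss from zeroing out weight i.
    saliency = 0.5 * hessian_diag * weights ** 2
    cutoff = np.quantile(saliency, prune_frac)
    # Zero the lowest-saliency weights; no retraining follows,
    # matching the "without subsequent retraining" above.
    mask = saliency > cutoff
    return weights * mask, mask

# Illustrative stand-in data, not a real model.
rng = np.random.default_rng(0)
w = rng.normal(size=1_000)
h = np.abs(rng.normal(size=1_000))  # stand-in diagonal Hessian
pruned_w, kept = obd_prune(w, h, prune_frac=0.2)
print(f"pruned {np.count_nonzero(~kept)} of {w.size} weights")
```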

The above had to be balanced against the plausibility of the document (without name-dropping terms like MeZO, LLM+P, or CHITA++), which needed to be debatable to maximize spread. Including concepts from two fields in which very few people are experts - cryptography and ML - greatly reduced the likelihood that any single person could credibly refute all of it in an informed manner, without delay. The effectiveness of this strategy was demonstrated after the letter’s release. A limited TTL was crucial to mitigating any long-term effects.

An exhaustive search of the key space was discussed at great length online, most likely because of its "simplicity", while the text specifically implied a cipher break. There was also a somewhat surprising focus on quantum security and NP-hardness, and many commenters confused the mentioned algorithms with RSA and SHA, and preimage attacks with collision attacks. The mention of context memory implied a move towards an uncharacteristic statefulness, possibly via vector databases or by maintaining several distributed states in an agent system - for example, enabling STM/LTM for a machine consciousness.
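To make those confusions concrete, here is the back-of-the-envelope arithmetic the commenters were mixing up - generic textbook bounds only, not claims about any real attack (practical MD5 collision attacks are in fact far cheaper than even the birthday bound):

```python
# Generic (brute-force) attack costs, expressed as log2(operations).
costs = {
    "AES-192 exhaustive key search": 192,
    "AES-128 exhaustive key search": 128,  # the "slightly less improbable" size
    "MD5 generic preimage (128-bit digest)": 128,
    "MD5 generic collision (birthday bound)": 128 / 2,
    "complexity claimed in the letter": 42,
}
for name, bits in costs.items():
    print(f"{name:40s} ~2^{bits:g}")
```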


Pub: 25 Nov 2023 00:28 UTC
Edit: 08 Apr 2024 06:05 UTC