GPT-3
The OpenAI chatbot is shockingly improved — its capabilities deserve attention.

![Text from GPT-3, claiming that terminators cannot take over the world in the same way that real machines or robots could.](/images/blog/img/2022/Terminators-800w.jpeg)
This week, it’s been hard to avoid text from OpenAI’s GPT-3 text generator, which has gotten transformationally better over the last year. Last year, as I prepared my OWASP AppSec keynote (25 Years in AppSec: Looking Back, Looking Forward), I was given early access and fed it the prompt “In 25 years, application security will be...” After filtering through some answers, it gave me a few passable bullet points. This year, it gave me something quite different, and I inserted the text into my slides:
![It is difficult to predict exactly how application security will evolve in 25 years, as it will likely depend on a variety of factors. It will likely remain a critical issue and will require ongoing attention and investment. Some potential developments in the field of application security in the next 25 years could include the adoption of new technologies such as quantum computing, the development of more sophisticated security protocols, and the integration of artificial intelligence and machine learning into security systems. Additionally, it is likely that there will be an increasing emphasis on protecting user data and privacy in the digital world.](/blog/img/2022/GPT3-ai-appsec.png)
The impact of freely available text that’s reasonably convincing is something that OpenAI (and others) have been thinking about, but it’s now viscerally here. A few interesting longreads I’ve come across are:
- On Bullshit, And AI-Generated Prose (Clive Thompson, Medium)
- OpenAI's new ChatGPT bot: 10 dangerous things it's capable of (Ax Sharma, Bleeping Computer)
- OpenAI’s New Chatbot Will Tell You How to Shoplift And Make Explosives (Janus Rose, Vice)
- Talking About Large Language Models (Murray Shanahan, arXiv) makes the point that large language models (including GPT-3) are really answering the question “what's the most common next token after my prompt?” (Added Dec 11)
- ML and flooding the zone with crap (Greg Linden, Geeking With Greg) points out that the majority is not always right, especially when there’s an incentive to manipulate appearances. (Added Dec 11)
- Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods (Evan Crothers, Nathalie Japkowicz, Herna Viktor, arXiv), an extensive survey. (Added Dec 11)
- A New Chat Bot Is a ‘Code Red’ for Google’s Search Business (Nico Grant, Cade Metz, New York Times) (Added Dec 22)
- 11 Problems ChatGPT Can Solve For Reverse Engineers and Malware Analysts (Aleksandar Milenkoski & Phil Stokes, SentinelOne), but note that Jordan points out that “the diagram is very wrong...locals are too high (they go below the saved return address), saved registers are too low (they go above arguments), and who knows what's up with the padding bit; it doesn't make sense at that location.”
- An A.I. Pioneer on What We Should Really Fear (David Marchese, NY Times) (Added Dec 28)
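Shanahan's “most common next token” framing can be made concrete with a toy sketch. This is emphatically not how GPT-3 works internally (real models use learned neural representations, not bigram counts, and sample rather than always picking the top token), but a tiny bigram model makes the shape of the question visible; the corpus and function names here are my own invention for illustration:

```python
from collections import Counter, defaultdict

# Toy illustration, NOT a real LLM: count word bigrams in a tiny
# made-up corpus, then repeatedly answer the question
# "what's the most common next token after my prompt?"
corpus = "the cat sat on the mat and the cat slept".split()

next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(prompt: str, steps: int = 4) -> str:
    tokens = prompt.split()
    for _ in range(steps):
        candidates = next_counts.get(tokens[-1])
        if not candidates:
            break  # never seen this token last; nothing to predict
        # Greedy decoding: always take the single most frequent continuation.
        tokens.append(candidates.most_common(1)[0][0])
    return " ".join(tokens)

print(generate("the"))  # → "the cat sat on the"
```

The output is fluent-looking but carries no understanding of cats or mats, which is exactly the gap Shanahan (and Thompson's “bullshit” essay above) are pointing at.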
Also, not a longread, but attributed to Andrew Feeney:
![@webber described ChatGPT as Mansplaining As A Service, and honestly I can't think of a better description. A service that instantly generates vaguely plausible sounding yet totally fabricated and baseless lectures at an instant with unflagging confidence in its own correctness on any topic, without concern, regard or awareness even of the level of expertise of its audience.](/blog/img/2022/mansplaining-as-a-service.png)