Shostack + Friends Blog


GPT-3

The OpenAI chatbot is shockingly improved, and its capabilities deserve attention.

[Image: text from GPT-3, claiming that terminators cannot take over the world in the same way that real machines or robots could.]

This week, it’s been hard to avoid text from OpenAI’s GPT-3 text generator, which has gotten transformationally better over the last year. Last year, as I prepared for my OWASP AppSec keynote (25 Years in AppSec: Looking Back, Looking Forward), I was given early access, and gave it the prompt “In 25 years, application security will be...” After filtering through some answers, it gave me some okay bullet points. This year, it gave me something quite different, and I inserted the text into my slides:

It is difficult to predict exactly how application security will evolve in 25 years, as it will likely depend on a variety of factors. It will remain a critical issue and will require ongoing attention and investment. Some potential developments in the field of application security in the next 25 years could include the adoption of new technologies such as quantum computing, the development of more sophisticated security protocols, and the integration of artificial intelligence and machine learning into security systems. Additionally, it is likely that there will be an increasing emphasis on protecting user data and privacy in the digital world.

The impact of freely available text that’s reasonably convincing is something that OpenAI (and others) have been thinking about, but it’s now viscerally here. A few interesting longreads I’ve come across are:

Also, not a longread, but attributed to Andrew Feeney:

 @webber described ChatGPT as Mansplaining As A Service, and honestly I can't think of a better description. A service that instantly generates vaguely plausible sounding yet totally fabricated and baseless lectures, with unflagging confidence in its own correctness on any topic, without concern, regard or awareness even of the level of expertise of its audience.