How AI is revolutionizing coding
When someone once commented on the African influences evident in his paintings, Pablo Picasso replied: “Good artists copy, great artists steal.” Programmers practise a similar kind of predation, and with a good search engine there is a lot to steal. Almost every programming problem has already been solved by someone, somewhere, at some point – and if a programmer can find that solution, they can save themselves a great deal of effort, or at least study the work of others as they puzzle out their own answers.
It is this resourcefulness that separates inexperienced young programmers from those, like me, who have been writing code for most of their lives. A few years ago, a study showed that the most productive programmers were the ones who consulted Google most frequently – along with Stack Overflow, a kind of Wikipedia of good code examples. These days I code with a whole row of browser tabs open – Google, Stack Overflow, documentation pages and so on. It’s like having an extra brain – millions of extra brains, really. This certainly makes programming less tiring than it was 20 years ago, when programmers spent much of their time solving problems with no way of knowing that others had already solved them.
This practice of leaning on the work of others, which programmers refine throughout their careers, has now been dramatically accelerated by one of the latest advances in artificial intelligence. At the end of June, GitHub – a “repository” used by millions of programmers to store their code and share it with their teams or, in the case of open source, with the world – announced a new tool: Copilot. Built on a powerful artificial intelligence model known as OpenAI Codex, it claims to do what good programmers have always done for themselves: find the best piece of code to solve the problem at hand.
That sounds like a tall order, because code can be difficult to read and understand for any programmer who didn’t write it, and because the whole act of coding – establishing a structure, then filling that structure with the necessary details – seems a distinctly human cognitive task. It was never something a programmer could ask Siri, Alexa or Google Assistant to do for them. In AI terms, this is the next level.
When I recently gained access to GitHub Copilot – it’s still in a limited preview – I had very low expectations. I figured it would do little more than offer a few hints as I sat down and typed my code. What I got, however, was much more than that. From the examples GitHub provides, I could see that I could phrase what I needed as a “comment” – a piece of code meant to be read by a human and ignored by the computer. So I wrote a single line of comment spelling out my needs and – abracadabra! – the computer provided a complete solution.
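To make this concrete, here is a hypothetical illustration of that workflow. The comment plays the role of the prompt I typed; the function beneath it is the kind of completion Copilot might suggest, not its verbatim output:

```python
# The comment below stands in for the prompt a programmer would type;
# everything after it is the sort of completion Copilot could produce.

# compute the median of a list of numbers
def median(numbers):
    ordered = sorted(numbers)
    n = len(ordered)
    mid = n // 2
    if n % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

print(median([3, 1, 4, 1, 5]))  # 3
```

The point is that the prompt is ordinary English, not code: the programmer states the intent, and the tool supplies a working implementation.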
I inspected the code returned by Copilot and realized the computer had given me exactly the result I was looking for – just what I would have written, if not quite in the same way. Coding styles differ among programmers, and Copilot has its own, very simple style. That makes it easy to read – always a good thing.
Still, after some time playing with it, I could see that Copilot is far from perfect. I asked it for code to do something pretty straightforward, and while it did come up with suggestions that would have done the job, none of them made sense in the context of the programming language I was using. I may need to learn to “talk” to Copilot in a language it understands, just as we’ve all learned to phrase our requests to Siri, Google and Alexa. And there is no doubt it feels astonishing when the computer quickly and silently produces just the snippet of code you’re looking for. Copilot will make bad programmers better and good programmers more productive. That is a good thing. But it’s just the tip of the iceberg.
Copilot sits atop OpenAI Codex, which in turn is based on something known as GPT-3 – the third iteration of a “Generative Pre-trained Transformer”, an artificial intelligence program that has been “taught” 175 billion “parameters” (think of them as rules) about human language. All of that data came from a huge vacuuming-up of the web – GPT-3 has sucked in much of what we’ve posted online over the past 30 years. With so much in it, and so many rules deduced from all that vacuuming, GPT-3 could do things no computer program had done before: write a summary of a technical article, find the highlights in a press release, even “read between the lines” of public statements by business leaders to discern the true health of a company. There is no magic here, no “thinking”, but those billions of rules make it seem as though GPT-3 can do things only humans can do. Mo rules, mo smarts.
But GPT-3 is now more than 18 months old – dog years in the fast-moving world of artificial intelligence. Earlier this month, Microsoft and Nvidia (maker of the expensive graphics cards favoured by die-hard gamers) unveiled the latest and biggest such program, MT-NLG. It has more than half a trillion parameters in its model of human language – three times as many as the suddenly modest GPT-3. What does that buy you? Greater powers of inference. In one example, Microsoft gave MT-NLG a statement and then asked it a question:
Statement: Famous professors supported the secretary.
Question: The teachers supported the secretary. True or false?
MT-NLG replied: True. The secretary was assisted by famous professors.
This is all the more significant because MT-NLG showed its work, explaining why it answered as it did.
Much of what we read consists of factual statements. MT-NLG, like GPT-3 before it, can digest those statements, draw inferences and then make decisions based on them. Is that “understanding”? The question to ask at this point is not “Does MT-NLG understand what it is reading?” but rather “Does MT-NLG have enough rules to give the correct answer?” The answer is – most of the time – yes.
Microsoft recently bought GitHub; Copilot and MT-NLG appear to be on a collision course. That means the kinds of suggestions Copilot offers programmers will soon get even better. Will this put programmers out of work? It seems unlikely. Instead, programmers will be able to focus on the interesting parts, using Copilot to provide workable solutions for all the necessary plumbing within every computer program.
In the same way the steam engine relieved human beings of the drudgery of mechanical work, artificial intelligence looks set to fulfil its promise of relieving the drudgery of office work. It won’t be long before Copilot-like tools are used for basic business communication – writing simple press releases, responding to customer emails and more. Automation will take over the boring parts of our white-collar lives, leaving us the interesting parts: the weird and the unexpected, the things no computer has ever learned and no human has ever seen. Humans are good at dealing with exceptional circumstances – and with the help of AI, we’ll have more time to get better at it.
This piece was produced in collaboration with cosmosmagazine.com.
This article first appeared in the print edition of The Saturday Paper on December 4, 2021 under the title “Cerebral codex”.