AI can now write computer code. The results may not be perfect, but the models are learning. Neural networks seem to do everything these days, powering Siri, self-driving cars, and much of the big-data world.
Neural nets and their kin started out as one-trick ponies. A text generator produced a gobbledygook whitepaper that was accepted as a non-reviewed paper at the 2005 WMSCI conference; Markov chains gave us the Kafkaesque glory of “Garkov”; and another network created trippy images in which everything looked like an eyeball or a cat.
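The Markov-chain trick behind generators like “Garkov” is simple: record which words follow which in a corpus, then walk that table at random. A minimal sketch (the corpus and function names here are illustrative, not from any of the projects mentioned):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, order=1, length=10, seed=0):
    """Walk the chain, picking a random successor at each step."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    while len(out) < length:
        successors = chain.get(tuple(out[-order:]))
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
chain = build_chain(corpus)
print(generate(chain, length=8, seed=1))
```

With a corpus the size of a comic-strip archive or a pile of conference papers, the same random walk yields locally plausible, globally nonsensical text.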
We can now train AI to produce prose so close to natural language that humans find it difficult to tell whether a human or a machine wrote it. Early last year, OpenAI's GPT-3 model was blogging, tweeting, and arguing. Its developers trained it on part of the Common Crawl dataset, which includes Wikipedia and a whole ton of books, among other large bodies of prose and code. And that's not all: Common Crawl also indexes GitHub.
Exposed to those bundles of Common Crawl data, GPT-3 picked up more than prose: through that osmosis it was able to produce snippets of intelligible computer code.
Building on that progress, the OpenAI team developed another version of the GPT-3 model and named it Codex. It was trained on a truly colossal set of prose from Common Crawl and computer code from GitHub and elsewhere. Codex is in fact already in use at GitHub, where it powers an intelligent code-suggestion tool named Copilot. It is a fluent AI: give it a natural-language prompt as input, and it generates code for the task it was given.
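Codex itself is accessed through OpenAI's service, but the interaction pattern is easy to illustrate: the prompt is an ordinary comment or docstring, and the model's completion is working code. The completion below is hand-written for illustration, not actual Codex output:

```python
# Prompt given to the model (plain natural language):
# "Write a function that returns the n most common words in a string."

# The kind of completion a code-generation model might produce:
from collections import Counter

def most_common_words(text, n):
    """Return the n most frequent words in text, most frequent first."""
    words = text.lower().split()
    return [word for word, _ in Counter(words).most_common(n)]

print(most_common_words("the cat and the dog and the bird", 2))  # → ['the', 'and']
```

The point is the interface: no formal specification, just a sentence describing the task, answered with runnable code.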