The current generation of LLMs uses natural language for input and output. This is convenient (and impressive) for human interaction, but what about computer-to-computer communication?
Emulating wire formats
With a simple prompt, GPT-3 can easily be coerced into accepting and emitting JSON. You can even use the prompt to specify the schema of the API responses you want, simply by giving it a few example responses. This makes it easy to build traditional systems around model inference.
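A minimal sketch of the idea: the schema is specified entirely by example, and the completion is validated as JSON on the way out. The weather schema, the example records, and the `parse_response` helper are all invented for illustration; the actual call to the model is left out.

```python
import json

# A few-shot prompt that teaches the model the response schema purely by
# example. The queries and records here are invented for illustration.
FEW_SHOT_PROMPT = """Answer each query with a JSON object matching the examples.

Query: weather in Paris
Response: {"city": "Paris", "temp_c": 18, "conditions": "cloudy"}

Query: weather in Oslo
Response: {"city": "Oslo", "temp_c": 4, "conditions": "snow"}

Query: weather in {query}
Response:"""

def build_prompt(query: str) -> str:
    """Fill the user's query into the few-shot template."""
    return FEW_SHOT_PROMPT.replace("{query}", query)

def parse_response(raw: str) -> dict:
    """Validate that the model's completion is JSON with the expected keys."""
    obj = json.loads(raw)
    assert {"city", "temp_c", "conditions"} <= obj.keys()
    return obj
```

`build_prompt("Lisbon")` would be sent to the model, and the completion fed through `parse_response` before the rest of the system ever sees it.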
This may solve ETL and the M:N API-integration problem, to some degree: fuzzy mappers can absorb small, unexpected changes in an API response. Of course, it may also introduce new opportunities for hidden problems.
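To make the fuzzy-mapper idea concrete, here is a deterministic stand-in that absorbs a field rename the way a model-backed mapper might. The expected field names and the `fuzzy_remap` helper are assumptions for illustration; a real system would ask the model rather than `difflib`.

```python
import difflib

# Field names our downstream system expects (invented for this example).
EXPECTED_FIELDS = ["city", "temperature", "conditions"]

def fuzzy_remap(record: dict, cutoff: float = 0.5) -> dict:
    """Map each incoming key to the closest expected field name, so a
    small upstream rename (e.g. "temp" -> "temperature") doesn't break
    the pipeline outright."""
    mapped = {}
    for key, value in record.items():
        matches = difflib.get_close_matches(key, EXPECTED_FIELDS, n=1, cutoff=cutoff)
        if matches:
            mapped[matches[0]] = value
        # Unrecognized keys are silently dropped here; a model-backed
        # mapper could instead decide where (or whether) to place them.
    return mapped
```

The hidden-problem side is visible even in this toy: a key that fuzzily matches the *wrong* field gets remapped silently, which is exactly the kind of failure that no longer shows up as a loud parse error.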
Encoders/decoders
In addition, GPT-3 is quite good at learning (or inventing) encoders and decoders. This means it could plausibly generate good compression schemes matched to the data. A general-purpose model might not always give exactly the right results – it would behave more like a probabilistic data structure, such as a Bloom filter. But fine-tuned GPT-3 encoders and decoders might go a long way toward being an efficient way to exchange data.
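For readers unfamiliar with the analogy, here is a tiny Bloom filter: "no" answers are definite, but "yes" answers are only probably right – much like asking a general-purpose model to decode data it was never fine-tuned on. The sizes and hash construction below are arbitrary choices for the sketch.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter. Membership answers are probabilistic:
    might_contain() never returns a false negative, but can return a
    false positive."""

    def __init__(self, size: int = 1024, hashes: int = 3):
        self.size = size
        self.hashes = hashes
        self.bits = bytearray(size)

    def _positions(self, item: str):
        # Derive k bit positions by salting a single hash function.
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item: str) -> bool:
        return all(self.bits[pos] for pos in self._positions(item))
```

The trade-off is the same one the paragraph gestures at: you give up certainty in exchange for a far more compact representation.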
Emulating APIs
ChatGPT has been remarkably successful at hallucinating APIs:
- Emulating a Linux virtual machine
- NodeJS/Python/SQL interpreter
- A host of other command line tools
The problem, of course, is hallucination – i.e., there's no guarantee that the results are referentially correct. But it does pose the question: if you have a fuzzy enough interface, or a simple enough one, you might be able to replace the backend with something like GPT-3.
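What would "the backend is a prompt" even look like? A sketch, with a stubbed-out model call so it runs offline – the `fake_model` function, the prompt wording, and the request format are all assumptions, standing in for a real GPT-3 call:

```python
import json

def fake_model(prompt: str) -> str:
    """Stand-in for a GPT-3 call so this sketch runs offline. A real
    backend would send `prompt` to the model API and return its text."""
    # Canned behavior: reply with a plausible JSON body for any GET.
    path = prompt.rsplit("GET ", 1)[-1].strip()
    return json.dumps({"path": path, "status": "ok"})

def handle_request(method: str, path: str, model=fake_model) -> dict:
    """A 'backend' that is nothing but a prompt: describe the request,
    let the model invent the response, and parse it as JSON."""
    prompt = (
        "You are a REST API. Reply to each request with a JSON body only.\n"
        f"{method} {path}"
    )
    return json.loads(model(prompt))
```

Nothing here knows what `/users/42` means – the entire contract lives in the prompt, which is precisely why it only makes sense for interfaces that are fuzzy or simple enough to tolerate a probabilistic server.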