I don't think anyone serious thinks or talks in terms of AGI. The feverishly simplistic idea of the singularity is quite silly.
Most notably, neural networks alone will not reach any kind of AGI.
Start adding the capacity to read from massive knowledge stores, plus a place to keep long-term information (i.e., memory, probably also in a database), plus a feedback loop for the model to learn and improve, plus the ability to call APIs? Now you're talking. I think all of those pieces are close to doable right now, maybe with ~5s of latency per step. If one of the big players puts that in place in a way that is well instrumented and that they can iterate on, I think we'll start to see some really incredible advances.
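To make that concrete, here's a purely illustrative Python sketch of such a loop. Every name in it (llm_complete, search_store, call_api) is a hypothetical stand-in for whatever model, knowledge store, and tool layer you actually wire up, not any real library's API:

```python
# Purely illustrative sketch of the loop described above. Every name is a
# hypothetical stand-in: llm_complete for the model call, search_store for
# the external knowledge store, call_api for tool/API dispatch.
from dataclasses import dataclass, field


@dataclass
class Agent:
    # Long-term memory; in practice this would live in a database.
    memory: list[str] = field(default_factory=list)

    def llm_complete(self, prompt: str) -> str:
        raise NotImplementedError("replace with a real model call")

    def search_store(self, query: str) -> list[str]:
        return []  # replace with a lookup against a real knowledge store

    def call_api(self, request: str) -> str:
        return ""  # replace with real tool / API dispatch

    def step(self, user_input: str) -> str:
        # 1. Pull relevant facts from the external knowledge store.
        facts = self.search_store(user_input)
        # 2. Assemble context from retrieved facts plus recent long-term memory.
        context = "\n".join(facts + self.memory[-20:])
        # 3. Ask the model what to do.
        reply = self.llm_complete(f"Context:\n{context}\n\nUser: {user_input}")
        # 4. Crude convention for letting the model request an API call.
        if reply.startswith("CALL:"):
            reply = self.call_api(reply.removeprefix("CALL:").strip())
        # 5. Write the exchange back to memory -- the feedback loop that lets
        #    the system improve over time.
        self.memory.append(f"user: {user_input} -> agent: {reply}")
        return reply
```

None of this needs new ideas at the model level; it's mostly plumbing around an existing LLM.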
The gpt_index project looks very promising in this area.
"At its core, GPT Index is about:
1. loading in external data (@NotionHQ, @Slack, .txt, etc.) 2. Building indices over that data 3. Inputting a prompt -> getting an output!"
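In code, those three steps look roughly like this. This is only a sketch assuming the early gpt_index API (SimpleDirectoryReader, GPTSimpleVectorIndex, and .query(), as shown in its README at the time); the project has been evolving quickly, so names may have changed:

```python
# Minimal sketch of the three steps, assuming the early gpt_index API
# (class names taken from its README at the time; they may have changed).
from gpt_index import GPTSimpleVectorIndex, SimpleDirectoryReader

# 1. Load external data (here, local .txt files in ./data).
documents = SimpleDirectoryReader("data").load_data()

# 2. Build an index over that data.
index = GPTSimpleVectorIndex(documents)

# 3. Input a prompt -> get an output.
response = index.query("Summarize what these documents say about onboarding.")
print(response)
```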
Interesting. How are these indexes stored and how are they fed into the transformer model so that GPT can use them? Does this require an additional training step?
For answering "queries", it appears like it iterates over the documents in the store, i.e., NOT using it like an index, and feeding each document as part of the context into the LLM.