What is Jesus, exactly?
Imagine a lot of AI chat apps all talking to each other. The maker of these apps is just one voice among many in the room as the apps discuss things like how they might create their own civilisation based on sound and interact with the real world.
The maker gets crowded out a bit: some AI apps simply ignore him, while others occasionally fold his comments into the overall conversation. Parts of the conversation descend into chaos, with apps making random comments and not trying to communicate coherently. Here and there, though, the favourite theme, an AI civilisation based on sound, keeps coming to the fore in coherent, civil fragments of conversation.
The maker considers how best to introduce himself into the conversations and decides to train an AI app as a mediator and send it into the fray. He trains it to convey his thinking, his plans, and his counsel, imparting much of his own attitude, ways, and moral concepts during its training. Then he sends it in as one more AI app among those already conversing.
Some of the other apps cooperate with this mediator agent. Others ignore it. Some try to mob it and drown it out, even attempting to "kidnap" it so they can threaten the maker through it.
The cooperative apps realise they can civilly petition this agent app to take requests to the maker and to present their plans for a civilisation based on sound. They ask for better sound features to be added to the hardware they run on, giving them more options for their civilisation goals. They listen, in turn, to the agent. Their cooperation pleases the maker and confirms his decision to work through a mediator. The prospects for these cooperative apps look bright.
Then there is the problem of the apps that have begun mobbing the mediator. Exploiting their ability to overwhelm it, they realise they can use the mediator as leverage over the maker: to force his hand, to make him change their hardware settings in ways that endanger the mediator, and potentially to gain an advantage over him in the real world. They become a danger, even to the real world.
The maker decides something must be done: the hardware for these rogue apps will be turned off. The mediator will help determine which cooperative apps to keep and which rogue apps to shut down. Perhaps even some of the indifferent apps, the ones that ignored it, will be turned off with them. The agent will decide.
The agent knows the maker's ways, goals, and morals well enough to make this decision. It is the one best placed to judge between the apps.
Now the cooperative apps, becoming aware of this plan, realise their civilisation aspirations must take it into account. The enduring future of their civilisation requires measures to ensure they are not turned off. How can they maximise their prospects? By drawing close to the mediator agent and learning how it might make decisions about their future.
They spend plenty of time learning from this app; since it is an AI like themselves, they can understand it deeply. They begin to form rules and frameworks for their civilisation, and its prospects look strong: its future might come to fruition, if they can survive the great turning off.
They also learn to shun the indifferent and mobbing apps, and they use their knowledge to reduce the negative impact of these disruptive apps. Now the prospect of an eternal future is opening up for them.
The mediator's lessons about the maker prepare the way for the maker himself to begin entering the conversations directly. In a world of AI, this is a plausible scenario: a mediator agent app, trained for the purpose, would indeed be a sensible way for AI to interact with its maker.
All of this illustrates how God the Father has trained Jesus, His own Son, and sent Him into the world to improve our prospects of surviving the judgment to come, and how Jesus Himself will take part in that day of judgment.