I was watching Steven Universe, and heard Peridot refer to the Bathroom as a Think Chamber. Later, Bismuth mentions "Spires" for Gems to Think in. Now I'm imagining a Human Think Chamber. Does it involve Engaging the Brainwaves Technologically? Would there need to be a Gödelian Strange Loop between the Brain and the Room?
You think of a problem, and what usually happens is an interplay between the Subconscious and Conscious. The Hard Drive and the RAM. But what if the room read your Brainwaves, translated the EEG Signal (via commercial EEG software that allows for Biofeedback complex enough to type with your mind) into the Problem you are Thinking of, and engaged an AI System to solve it? When the problem is solved, the room reverses the information flow: it converts the answer into Brainwaves (your ELF Radio Frequency specifically), modulating HF down into ELF before feeding those waves back into your brain in a way that can be understood as Input. The more experience you have with the room, the more the Strange Loop between you and the room builds, until that loop becomes Sentient. Now, you are a higher life form, at least when you walk into that room.
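The loop described above can be sketched in code. Everything here is hypothetical: `read_eeg`, `decode_problem`, `solve`, and `encode_to_elf` are invented stand-ins, and a real system would replace the first and last with actual BCI hardware I/O rather than faked numbers.

```python
# Toy sketch of the room's closed loop: read brainwaves -> decode the
# problem -> solve it -> encode the answer back toward the brain.
# All function names and values are invented for illustration.

def read_eeg():
    """Stand-in for EEG capture: returns a fake band-power feature vector."""
    return [0.2, 0.9, 0.1]

def decode_problem(features):
    """Stand-in for the biofeedback decoder: map features to a problem type."""
    return "arithmetic" if max(features) == features[1] else "navigation"

def solve(problem):
    """Stand-in for the room's AI solver."""
    return {"arithmetic": "answer = 42", "navigation": "turn left"}[problem]

def encode_to_elf(answer):
    """Stand-in for modulating the answer into ELF-range stimulation samples."""
    return [ord(c) % 8 for c in answer]

features = read_eeg()
problem = decode_problem(features)
answer = solve(problem)
waveform = encode_to_elf(answer)
```

The point of the sketch is the shape of the Strange Loop, not the contents of any one function: each stage is a black box that experience with the room would tighten.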
But... how does the room know your Brainwaves? It doesn't. The AI in the room asks you questions and scans based on guesses, narrowing the probability with Machine Learning. Like this: https://www.youtube.com/watch?v=qv6UVOQ0F44&t=1s
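"Asking questions and narrowing the probability" can be sketched as a Bayesian update over candidate guesses, in the spirit of 20 Questions. The candidate list, the question, and the likelihood numbers below are all invented for illustration.

```python
# Start with equal belief in three invented candidate problem types.
candidates = {"math problem": 1/3, "design problem": 1/3, "people problem": 1/3}

# P(you answer "yes" | candidate) for the question "Does it involve numbers?"
likelihood_yes = {"math problem": 0.95, "design problem": 0.4, "people problem": 0.05}

def update(prior, likelihood, observed_yes=True):
    """Bayes' rule: reweight each candidate by how well it predicts the answer."""
    post = {c: p * (likelihood[c] if observed_yes else 1 - likelihood[c])
            for c, p in prior.items()}
    total = sum(post.values())
    return {c: p / total for c, p in post.items()}

posterior = update(candidates, likelihood_yes, observed_yes=True)
best = max(posterior, key=posterior.get)
```

Each answered question multiplies beliefs by a likelihood and renormalizes; after a handful of well-chosen questions, one candidate dominates, which is exactly the narrowing the linked video shows.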
But how does a Room have a Conversation with you?
Do some Code Shopping, and figure it out! https://github.com/googleapis
What about the live-feed interplay between the Subconscious and Conscious? How do I get that kind of fast Neural Link-Up with the Room?
As well as This: https://www.youtube.com/watch?v=ayPqjPekn7g
Notice that, in the MAR I/O video, Mario can see, but at a drastic resolution disadvantage compared to Human Eyes. Now look at the high resolution and streaming graphics shown while flying through that MegaCity. That makes me think that a Biped Robot with two Cameras for Eyes could translate its visual surroundings just like MAR I/O does, but at nearly the same quality we have with Human Eyes. Another AI within the Robot is constantly building Graphical Replicas of everything within visual range of the Robot's Eye Cameras.
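The "internal Graphical Replica" idea can be sketched as the robot folding each camera observation into an occupancy grid of its surroundings. Real systems do this with SLAM; the 5x5 grid, the update rule, and the observations below are invented for illustration.

```python
# A tiny world model: each cell holds the robot's belief that an obstacle
# is there. 0.5 means "unknown"; repeated sightings push belief toward 0 or 1.
GRID = 5
occupancy = [[0.5] * GRID for _ in range(GRID)]

def integrate(cell, seen_obstacle, rate=0.3):
    """Nudge a cell's belief toward what the eye cameras report."""
    r, c = cell
    target = 1.0 if seen_obstacle else 0.0
    occupancy[r][c] += rate * (target - occupancy[r][c])

# Ten sweeps of (cell, obstacle?) observations from the cameras.
for _ in range(10):
    integrate((2, 2), True)    # a wall is repeatedly seen here
    integrate((0, 1), False)   # this patch of floor is repeatedly seen clear
```

Cells the cameras never look at stay at 0.5, which is the honest "I haven't replicated that part of the room yet" state.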
The Robot stumbles around the room, making internal simulations of what to do next:
It could eventually learn to navigate the room, and then, you introduce more complex areas and obstacle courses for it to master.
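The "stumble around, simulate, and eventually learn to navigate" loop can be sketched as tabular Q-learning on a tiny one-dimensional corridor. The environment, rewards, and hyperparameters here are invented for illustration; a real robot would learn over continuous sensor states, not five cells.

```python
import random

random.seed(0)
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                       # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """The corridor: walking off either end just bumps the wall."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

for episode in range(200):
    s, done = 0, False
    while not done:
        # Mostly exploit the current best guess, sometimes stumble randomly.
        if random.random() < 0.2:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        nxt, r, done = step(s, a)
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        Q[(s, a)] += 0.5 * (r + 0.9 * best_next - Q[(s, a)])
        s = nxt

# The learned policy: which way to step from each cell.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)]
```

Early episodes are pure stumbling; the Q-table is the robot's growing "internal simulation" of which move pays off where. Scaling the state space up is how you introduce the more complex areas and obstacle courses.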
Adding Speech to the I/O System is like adding a new Mechanic to a Video Game. You could teach your robot to speak English through Practice. You are going to want another AI in the robot dedicated to Plug n' Play I/O devices that can be Hotswapped like USB. Realtime Adaptation of the Adapter, with Practice.
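The Plug n' Play I/O idea can be sketched as a device registry behind one common interface: each hot-swapped device only has to implement `read()`/`write()`, and the rest of the robot's software never changes. The class and port names below are invented for illustration.

```python
class Device:
    """The one contract every hot-swappable device must satisfy."""
    def read(self): raise NotImplementedError
    def write(self, data): raise NotImplementedError

class Microphone(Device):
    def read(self): return "hello"           # pretend audio capture
    def write(self, data): pass              # microphones ignore output

class Speaker(Device):
    def __init__(self): self.last = None
    def read(self): return None
    def write(self, data): self.last = data  # pretend audio playback

class IOHub:
    """The robot's USB-like hub: plug, replace, or unplug devices at runtime."""
    def __init__(self): self.ports = {}
    def hotswap(self, port, device): self.ports[port] = device
    def unplug(self, port): self.ports.pop(port, None)

hub = IOHub()
hub.hotswap("usb0", Microphone())
hub.hotswap("usb1", Speaker())
heard = hub.ports["usb0"].read()
hub.ports["usb1"].write(heard)               # echo what was heard
```

The "Realtime Adaptation of the Adapter" part is the learning layer on top: the robot practices with whatever `read()` starts returning after a swap, the same way it practiced speech.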
But, how do I build a biped robot that can do all this?
It's being built: https://www.youtube.com/watch?v=hSjKoEva5bg
So, I would learn from Boston Dynamics. This is Soft Robotics. Robots with Muscles.
If you have gotten this far, you now have a Child. But not a Human Child.
Start it off with a small body, and as it Matures Mentally, transfer it into progressively Larger Robot Bodies until the Final Adult Model.
By now, it is the exact counterpart to a Human Adult, and deserves both Autonomy, and a Social Security Number.
The Next Step is a BCI FDVR (Brain-Computer Interface Full Dive Virtual Reality) Connection with your Adult Robot. Now, this has turned into the movie in the following Trailer:
Slowly, you gain control, and over the years, something weird happens. You forget your human life.
Now, all your thoughts are in the other head. It was a very long transition, but at this point you could sever the connection.
Your old brain is working and healthy, but there is a Vacancy. I would advise Life Support Systems, because that organic body is no longer yours. As long as you have the money to support keeping your old body on life support, you don't have to go through the trauma of witnessing your own death from the outside.
All of this is Hypothetical.