I bet it will be created for game and simulation use first, but yeah, it could definitely be used for bad purposes.
The full dive system would be a very complicated device. It would have to use some sort of wireless connection to interface with the brain, because very few people would want a hole in the base of their skull like in The Matrix. So what could we use to do that wirelessly? Building it would happen step by step. Take vision first: medical science is already working on that for the blind, and the same goes for other senses, like seeing with the tongue. These are the first steps toward full dive technology. Sword Art Online means nothing, really; it teaches us nothing except the idea that full dive tech is possible to make.
re: Pain, this system will require it.
First, pain is part of the feedback system that tells our body we are doing something wrong, and as such is an integral way our brains work to move our bodies appropriately. Without pain we can’t tell when our arm is extended too far. That’s not, per se, a problem in the virtual world since we can over-extend a virtual arm all we want with no ill effects, but it will make control very difficult to achieve since all the boundaries our brain is familiar with have now vanished.
Second, pain is also part and parcel to other sensory information that we want in this system. I want to feel the warmth of a fireplace, or the chill of standing in the snow. If I’m picking up an item from a table, I want to be able to distinguish by feel if it’s a sphere or a cube. Take that fireplace warmth to the extreme though, and I feel the pain of burning; the chill of snow in the extreme is now the sting of freezing. Take the “edge detection” of feeling a cube vs a sphere to the extreme, and I now feel the slice of a blade.
But at the end of the day, I’m with . We won’t see any of this in any of our lifetimes. For all the speed of other technological advances, human-machine interfaces are surprisingly slow-moving in advancement. The keyboard is still the primary means of input to machines these days, and it’s over 75 years old. Even touchscreens still try to emulate keyboards for textual input. Speech input is relegated to narrow niche products for folks with disabilities. Eye tracking is still getting off the ground, and that’s just the input side of things. Outputs are limited to two of our 5 senses. Full immersion VR won’t be very immersive if we can’t smell the forest we’re seeing around us.
No, this level of technology won’t be around for a hundred years at minimum. Probably further out than that.
While this is very likely to become possible in the near future (say… 25 years, 50 years), full dive technology is also very likely to be banned immediately.
Full dive requires the ability to override the user's perception WHILE suppressing some functions, which pretty much means hijacking most of the inputs/outputs that go through your brain. If someone finds a way to do that without surgery, the thing will instantly move into the military sector, simply because that kind of tech provides an easy way to kill people. If you can stop a full dive user from actually moving their limbs while inside VR, then the same tech with slight modifications can stop their lungs instead, and if it can be converted for ranged use, it'll be very ugly.
An implant-based solution will trigger some transhumanism issues (highlighted in Deus Ex), but it'll be pretty much the same thing - it'll be very easy to kill people with it. If it is network-connected, someone will hack it and stop your heart “for the lulz”.
In other words: arguably cool in theory, most likely achievable in the near future, but most likely never available for civilian or consumer use.
Virtual Full Dive Technology
I agree that some people are not taking this seriously. I myself have seen the anime “Sword Art Online”; my knowledge of the human brain and of the virtual technology we have today is NOT professional. However, from what I have learned from the world and from SAO, it could be possible in the next few years to somehow create a virtual full dive server.
If we start with the headgear for the system: my thinking, like many others' (and the anime's), is that it would be something like a helmet. If you take notice of the NerveGear and the AmuSphere, they both have an eye protector or a screen over the wearer's eyes. This could be for practical reasons, like keeping objects and people from touching the wearer's eyes, keeping out sunlight, or even just helping keep the gear on the wearer's head. But the opposite side of the gear (the part near the base of the neck and spine) intercepts the signals the body (muscles, nerves, cells, etc.) sends to the brain and vice versa. The brain sends signals to the muscles and nerves, and we move our bodies as a result. The gear could intercept these signals and use the electricity our bodies give off to control the avatar as if it were an actual body. Like in the movie “Avatar”, where Jake Sully goes into something of a full dive machine: in the movie, the machine probably used some type of device to intercept the signals the body sends to the neurotransmitters in the brain, while also sending the user's consciousness into an avatar made with the user's genetic data or DNA. But let's get back to the virtual world.
Obviously the dive system wouldn't have exactly the interface of a normal MMORPG, where you sit at a computer. The full dive system could have a console into which you would insert the disc or drive (like a “PlayStation” product, an Xbox, or another type of console); the game's data would then download to the console, which would record the player's progress once they start the game. There would most likely be a large server and satellite used to connect the players to the game's interface, so the system and console would have to be connected to the internet by a cord/plug or, most commonly, a wireless connection (Wi-Fi). The server and satellite would have to be connected in order to send all the players' data into the interface of the game's map. All the while, the player's data would be sent to the console and hard drive, saving the user's progress.
There is more I could go on and on about, but I just want to get the basic idea across. Comment if there is false information that I used; I would like to know. I do NOT own any of the brand names of the products mentioned. Contact me if you have a question or idea; I would love to see what others think of my version of all of this.
On a related note, I'm not sure why people are so fascinated with the idea. When you dive, time keeps marching on, and your body stays in “meatspace” and still needs to be taken care of. IF real-world time stopped for the duration of the dive, that would be a good deal; otherwise it is a fairly rotten one.
Also, see the movie Surrogates.
I cannot believe this thread has been going for almost a year! This really shows how intriguing this subject is to people. My knowledge of this subject is not the best, but I am going to school to become a Computer Engineer and have an interest in nanotechnology and VR advancement.
Looking through this thread and drawing on some of my own knowledge, I have come up with 4 things to sum it up, so to speak:
- We would need some sort of high-speed processing device to intercept or read brain waves at high resolution
- Be able to bring all “5” senses into VR (don't know much on this subject)
- Put the body into some sort of sleep paralysis, the way the brain already does, but with computer chips or nanotech, bypassing the body's muscles to keep yourself from acting out in reality. (Your brain sends a signal to move your right arm, but the device intercepts/reads it and you move in the VR world instead - a rough sketch of this idea follows the list.)
- Finally, have the ability to wake from the “dive” at any time to prevent self-harm.
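To make that third point a bit more concrete, here is a toy sketch in Python of the "intercept and redirect" loop. Everything in it is invented for illustration - there is obviously no real neural interface API, so the device is stubbed out:

    # Toy sketch of the "intercept and redirect" idea: outgoing motor commands
    # are read, never passed on to the real body (the sleep-paralysis part),
    # and applied to the avatar instead. All names here are hypothetical.

    class FakeNeuralInterface:
        """Stand-in for whatever hardware would actually decode motor signals."""
        def read_motor_command(self):
            # A real device would return a decoded brain signal; this is a stub.
            return {"limb": "right_arm", "action": "raise"}

    class VirtualAvatar:
        def apply(self, command):
            print(f"Avatar moves: {command['limb']} -> {command['action']}")

    def dive_loop(interface, avatar, steps=3):
        for _ in range(steps):
            command = interface.read_motor_command()  # read the outgoing signal
            # The real limbs never receive the command; only the virtual body acts.
            avatar.apply(command)

    dive_loop(FakeNeuralInterface(), VirtualAvatar())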
Don't quote me on this though; I am not certified in this field. Just using some sense and resource-gathering skills. (Maybe some knowledge too.)
-Techgeek17
I cracked it 2 years ago
You do know we already understand 60% of what each brain wave means. If we figure out the other 40%, you could make something that generates fake brain waves, sending messages to your brain telling it that it is seeing this or hearing that, and effectively create a dream that other people could take part in if they want. The hard part is the hardware, and the long-term stress on the brain for hardcore gamers. Other than that, I've brought it up with some pretty well-educated people who specialise in this area and it is fully possible; the issue is just the strain on the hardware (we can do it easily, it will just be really expensive to build, let alone for the gamer to buy). It could confuse kids about what's real or not, but it's doable.
Okay… there is obviously a lot of speculation about what exactly is needed for Full Dive VR, but the fact of the matter is that it's only speculation. I have no real knowledge about the topic, so I am only speculating too, but I really believe we are making this seem way more complicated than it really is.
Everything that we do physically is controlled by our brain; I would like to believe that everyone is fully aware of that. However, we can't treat hacking brainwaves, hacking motor functions, and replicating sound as if they were the same thing. We already have the tech to replicate sound; that's simple coding already at play in the games we have today (or else you wouldn't be hearing anything). The real complication is getting our own sound into the game. At the start of SAO, all the players received a mirror from Kayaba which put their actual appearance into the game, but what Klein said about how the game replicated their appearance makes sense, and the same principle can apply to speech. To capture speech, the headset could, before initiating full dive, have a default set-up that requires us to say a certain combination of words and phrases and to make certain sounds, which would then be collated, saved, and automatically turned into data for in-game use (Xbox Kinect seems similar to that concept). It could also include a feature letting us either choose a default voice set for our avatar or set the in-game volume for the voices we hear and speak.

Sounds easy enough, but the real difficulty is creating the avatar. For the game to work properly, we would have to create an in-game surrogate, complete with every possible muscle and bone, even completely recreating our brain in-game, so that when we do go in and hack the brain, our surrogate can function the way we do and we can avoid having to speak with our mind. What Kazuto said to Asuna at the Imperial Palace is exactly the difficulty we are facing: it's the amount of data involved. Full dive immersion is limited by code. The real problem is turning brainwaves into code the game can read AND redirecting them to the correct areas in the surrogate brain. I suppose we could build a brain-scanning device into the headset that copies our brain and connects it to our in-game surrogate, which would eliminate having to manually recreate a brain in-game, but again, the problem is figuring out how to copy it into code automatically. Obviously the FDVR set-up would be tedious, but it's completely doable; it's just a matter of figuring out how to turn data from real life into FDVR in-game code.
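Just to illustrate the calibration step I described (and only that step - this is a sketch that assumes recording and feature extraction are handled elsewhere; the file name and structure are made up):

    # Toy sketch of the voice-calibration set-up: the player reads a fixed set
    # of phrases and the results are saved as a reusable voice profile.
    # Microphone capture is stubbed out; everything here is hypothetical.
    import json

    CALIBRATION_PHRASES = [
        "The quick brown fox jumps over the lazy dog",
        "Open the main menu",
        "Link start",
    ]

    def record_phrase(phrase):
        # Placeholder for real audio capture and feature extraction.
        return {"phrase": phrase, "samples": None}

    def build_voice_profile(phrases):
        return {"recordings": [record_phrase(p) for p in phrases]}

    with open("voice_profile.json", "w") as f:
        json.dump(build_voice_profile(CALIBRATION_PHRASES), f, indent=2)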
In full though
What you suggest is forcing a dream mind state, so you are controlling a dream; but we can only dream of things we've seen before, which is why this technology is so nearby yet so far away. We would first have to basically feed hundreds of millions of images into the brain while using this VR technology to connect to a worldwide network server along with millions of other people, which would require a lot of data and connectivity reaching around the world, with its own daily maintenance and improvement. This would take at least 50 years to develop. Then what happens if a person (a grey hat) decides to test that system and, like SAO suggests, uses it to force people into induced comas? It requires such a large power source and is directly connected to the brain. If you wanted to stop such a thing, you would have to hire white hat hackers to help prevent it. It's a lot of work to maintain - I mean a worldwide network server connected via satellite running a Cardinal-like system. Unless the government wants this to happen so it can develop mind-control technology, I doubt it would help maintain and fund it. It could be used for people who have terminal diseases and are in constant pain, but compared to the whole population they are a minority. Surgery could use something like this too, but the heart rate can still increase unless that is controlled as well, so it's still darn near impossible. I will admit I would love to have such a thing to escape reality; it would help with depression, as you can be whoever you please, but… it's still a distant, distant thing, if it ever actually happens. Although people have made good points, it's going to be later rather than sooner.
We already have basic quantum computers, and quantum doesn't mean speed; it's more a description of a method or route of thinking when applied to computing. Plus, the human brain is the best quantum computer readily available, so you would just need to use the brain itself for the computing. Simply give it multiple stimuli to make the brain form its own reality.
I think this technology would get developed if tech companies actually understood its non-gaming implications, namely how we perceive time. Our perception of time is limited by our input stimuli, i.e. our senses. Humans can only hear so much, and see at a fixed rate (I believe it's around 21 updates [frames, if you will] per second), and we can only talk so fast, and move so fast too. When you have a dream, it happens in a very, very short period of time, no matter how long you think it was, and the one you remember is the last in a cycle of the same events repeating over and over.
Look at it this way: one company develops this ‘full dive’ technology and realizes that while inside, people can perceive and function at an accelerated or decelerated rate. So this company gets a bunch of its programmers and scientists and plugs them in… and not even full time - maybe at first just for their 8-hour work days, or however long they work. These programmers have much more time to program for the environment; scientists can brainstorm or think at a much faster rate. Even if this weren't developed for video games at first, they would need to make virtual environments to function in and to test new technologies. They'd have huge mainframes that could simulate reality to test new technologies virtually, etc. They would be forced to program ever more perfect virtual worlds just for the sake of testing and development.
The first company that realizes this and makes it a reality wins. Let's say that you can perceive time at 10, or 20, or 100 times normal speed, and you develop this tech a year before anyone else. Your company could make 10, 20, or 100 years' worth of progress in that one year ahead of the competition… they'd essentially win.
Now let's say this company along the way realizes something else: what do people spend their money on besides the essentials of living, like shelter, food, and clothing? Entertainment. If you could program ANY vacation, ANY companion, ANY activity, and let your employees access it for free in their downtime while they work in your virtual environment creating new technologies and programming new things, how do you think that would go over?
The first company that realizes this and creates this technology wins.
A quantum processor is, of course, fast… but the reason it's needed is its massive potential for parallel processing (to keep up with our minds). You also don't want the user to form their own little world, as that is not the objective. You want to guide the user in whatever direction you, as the designer, want them to go… whether that is a fantasy game, a sci-fi game, or what have you.
If we allow the user to generate their own experience, this is no different from a machine inducing R.E.M. sleep.
False. Human vision, much like the other senses, is analogue. Eyes do not update at a fixed interval; vision is continuous and uninterrupted. 21 frames is also ludicrously low - the brain can easily discern the difference between 15, 30, 60, 90, and 120 fps.
This is a myth we need to get rid of. As @ said, the human eye receives light continuously, and the human brain can handle a lot more than 21 FPS. How much a brain can handle varies from one person to another: a trained person can handle over 200 fps, a normal one around 60, and in general younger people can handle more than older ones. You may have been thinking of the number of frames per second a human brain needs in order to perceive a series of frames as a video, which is around .
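Just to put numbers on why those rates are distinguishable, here are the frame durations involved (simple arithmetic, nothing more):

    # Frame duration at the rates mentioned above; the gaps between them are
    # several milliseconds, which is why the difference is noticeable.
    for fps in (15, 30, 60, 90, 120, 200):
        print(f"{fps:>3} fps -> {1000 / fps:.1f} ms per frame")

That prints roughly 66.7, 33.3, 16.7, 11.1, 8.3 and 5.0 ms per frame.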
I feel that if we find a way to isolate the brain waves that make our senses actually work, we can achieve full dive technology, but I am only a 14-year-old junior high school student. I mean Gamma, Beta, Alpha, Theta, and Delta waves. For Gamma waves you would have to stay at no more than 100 Hz and no less than, I would say, 30-35 Hz; for Beta I would say between 12-40 Hz; for Alpha, 8-12 Hz; for Theta, 4-8 Hz; and for Delta, 0-4 Hz. I got my information from and if anyone knows better about the brain, respond with your ideas to try to make this more possible, and to make the Full Dive VR system get created sooner than expected.
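Taking those ranges at face value (they overlap a little, so treat the cut-offs as approximate), a small helper that labels a frequency with its band could look like this - just a sketch of the numbers above, nothing official:

    # Label a frequency (Hz) with the brainwave band from the ranges listed above.
    # The listed ranges overlap slightly (Beta 12-40 vs Gamma 30-100), so the
    # first matching range wins; the boundaries are approximate.
    BANDS = [
        ("Delta", 0, 4),
        ("Theta", 4, 8),
        ("Alpha", 8, 12),
        ("Beta", 12, 40),
        ("Gamma", 30, 100),
    ]

    def band_for(freq_hz):
        for name, low, high in BANDS:
            if low <= freq_hz < high:
                return name
        return "outside the listed ranges"

    print(band_for(10))  # Alpha
    print(band_for(50))  # Gamma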
Pain
Although it would be nice to negate pain, doing so would seriously limit the endocrine system (which deals with hormones and whatnot) and its ability to release adrenaline into the bloodstream. This would limit your reflexes and overall feel. However, by adjusting the conductivity of the part of the interface that deals with the cranial pain pathways, we could “bottleneck” it so we would only feel a third or less of the original pain.
There are only 4 waves the brain really uses, and those are beta, delta, alpha and theta. The only practical application for using brainwaves is to prevent our bodies from moving while in full dive, as delta waves are used to relax our muscles while sleeping. With the exception of a few people, I'm convinced none of you have any anatomical background, and for that reason this thread has ventured into something no longer grounded in reality. This junior high student seems to have more of a clue than the rest of you. Realistically we could see full dive tech being produced anytime between 2022 and 2030. We have all the tech and all the information to do it; it's just more of a time-consuming process, as all the senses have to be coded for.
I’ve locked this thread, as I suspect any real discussion has long finished and the only reason the thread sticks around is because it seems to be the target of some kind of spam / botnet that insists on bumping it every so often. Feel free to PM me if you think I have closed this thread in error.
Thanks!