Superception is a research framework that takes an engineered approach to transforming and augmenting human perception and cognition using computer technology. The word superception is a portmanteau of super (a condition beyond the ordinary, a grouping beyond the individual, meta-) and perception. My belief is that superception can change the contours that define the sense of self, allowing us to enhance our perception, understand others better, and consciously control our unconscious mind. Ever since creating this research framework, I have been conducting research into the augmentation of human senses and perception [1]. And now, in 2021, I have reached a watershed moment.
I believe that superception research will become the foundation of how human beings perceive and understand the self in terms of the various relationships between mind and body. Strangely enough, 2021 has brought a watershed moment for my research framework, the huge societal changes wrought by the coronavirus, and the development of an important project, all at once. At this rare confluence, I would like to describe once again what “superception” aims to achieve.
Updating the “Assumptions” Society Held
The changes in society caused by COVID-19 are far too great to be described here. At the level of both society and the individual, countless things were lost. However, it was also a year in which we humans displayed impressive grit, boldly overcoming physical barriers through the use of technology, including the dramatic increase in telework, online communication and virtual events. It seems that we have been performing a massive experiment at the societal level.
As for me, when a technology demo/workshop I was supposed to give in person at a physical venue was changed to an online event [2], audience participation actually increased. And since seminars and lectures were now all online, I had more opportunities to talk than ever before. Almost all international conferences moved online and were archived, which is actually convenient; and with the extensive use of online tools, I eventually found that research discussions can be even livelier.
Think about how the mere idea of remote work overturns the huge assumption we once had that “you could only be somewhere if you were physically there.” Nowadays if someone says, “Let’s have a meeting,” the default assumption is that it will be held online. Of course online meeting software did exist before, but changes to social rituals (even if they are semi-forced) inevitably change our assumptions. Nowadays, if you say “I am there” it is no longer assumed you are physically there, only that the “information of you” is there…and that is enough. At first, changes like this are not evenly distributed, but then the pace of adoption accelerates—it is plain to see that ways of life we had simply “assumed” are now being revised every few months.
Integrating Humans and Computers
I believe that remote work is just one part of a broader transformation of our assumptions about behavior, one that will lead to major changes in the relationship between the human mind and the human body. One of these is the integration of humans and computers. An assumption we have held until now is that a human being acts as a discrete system, making judgments and taking actions based on its own perceptions. The term “human-computer interface” was coined to describe technology that creates a dialogue between a human and a computer. So, what does it look like when we overturn our old assumptions and create a new relationship between human and computer?
As with many other researchers, I see great potential in this “human-computer integration”—the forging of a mutual connection between human and computer.[3] Human-computer integration can involve not only a person implanted with some sort of smart equipment, but also computer support or augmentation of physical actions or movements, or even one’s body represented as a different body, in a virtual space—essentially, anything where humans and physical computers are mutually integrated for some specific purpose. You could think of this as a development of “Man-Computer Symbiosis,” the concept of symbiotic information management proposed by J. C. R. Licklider in 1960.[4] In human-computer integration, human and computer become an integrated agent; you could say it involves an “embodied symbiosis” of human and computer.
The Self in Human-Computer Integration
Using the idea of human-computer integration as our starting point leads us to technological possibilities like the assistance or augmentation of human physical actions and decision-making, and the enhancement of human abilities. However, in order for true integration to be achieved, we must consider “To what extent am I myself?” from the user’s point of view. Even if we create a system with abilities beyond the human, and even if it allows you to use a brand new body, if you do not perceive or feel that you are “you” when you use it, then the technology has not truly augmented the human being.
So, in terms of our perception/awareness, how does the sense that the self is indeed the self arise? S. Gallagher put forward the idea that we can divide the self into two basic parts: the narrative self and the minimal self.[5] The narrative self “involves personal identity and continuity across time,” while the minimal self is “devoid of temporal extension”—it is created here and now, moment by moment. Focusing on this idea that human beings are not immutable, and bringing it into our model of the human-computer relationship, I think a computational mediation of the “minimal self”, constantly updated in real time, will be one new form of human-computer integration.
Two Elements of the Self
In recent years, I myself have done research that focuses on two elements of the minimal self. These are sense of agency—the perception/awareness that “I am the one who carried out this action”—and sense of body ownership—the perception/awareness that “this is my body.”
In our daily lives, we do not walk around with a conscious feeling of ownership over our bodies, the sense that “this is my body.” Sometimes when you lose your sense of touch, perhaps because a body part has briefly gone numb, you might experience a temporary loss of this feeling of ownership; such manipulations of body ownership have also been demonstrated in special settings, most famously the “rubber hand illusion,” an experiment involving synchronized tactile sensations applied to a rubber hand.[6]
However, with the advent of virtual reality (VR) and the things it makes possible—physicality in an imaginary space, controlling a robotic body remotely—I expect that we will see more and more ways to transmit our physical actions into things that are not our bodies. In other words, our previous assumptions about what our body is will be overturned in countless ways, and we will experience the feeling of “my body” with bodies that are not our physical ones. In the future, if we can feel a sense of body ownership beyond our own body, in what ways will that change human senses? I myself have been working with the Yamaguchi Center for Arts and Media (YCAM) to research how human senses change through “spatial-temporal deformation of virtual human bodies.”[7]
Another significant factor is the sense of agency, the perception/awareness that “the one who carried out this action is me.” This is a foundation for maintaining a firm sense of self, at every level of human behavior. Whether we are typing on our smartphone, driving our car, or tossing a stone in a pond, the reactions to our actions are what make us feel that we exist.
Integration with Superhuman Computers
When we integrate a human with “superhuman” computers, does that person feel a sense of agency? Existing computer-assistance technology is based on the assumption that it should take its lead from human action. You could describe these systems as computers that help humans by observing human actions, predicting what they will do next, and then presenting information, offering physical assistance, and so on. However, when human and computer become one integrated agent, the old assumption no longer applies. Assistance now means a state in which human beings’ original abilities are augmented and surpassed. When that happens, will we still feel a sense of agency over these superhuman abilities?
As an example, my research team investigated how we attribute superhuman abilities to ourselves by means of electrical muscle stimulation (EMS). Using EMS on subjects’ arms to directly actuate the body, the subjects achieved superhuman reaction times in a simple visual reaction game.[8] As part of our results, we found that under certain temporal conditions, the subjects reported a sense of agency (“ ‘I’ performed the action”) even when it was the electrical muscle stimulation driving the faster hand movement, not their own voluntary action.[9]
We showed that, in the future, we can design systems of human-computer integration where humans feel a sense of agency even when the computer is the direct driver of the action and is what allows us to perform superhuman feats. We are currently undertaking various research projects in this vein.[10, 11] This series of studies attempts to show how human consciousness and computers can take joint ownership of the human body. In sum, we are researching a relationship between mind and body which moves past the assumption that “my mind is what moves my body.”
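The temporal logic behind these findings can be sketched as a toy simulation. All numbers below are illustrative assumptions, not the parameters reported in the papers: voluntary reaction times are drawn from a distribution, EMS actuates the hand at a fixed latency after the stimulus, and we simply assume agency is preserved only when the EMS preempts the voluntary reaction by a small margin.

```python
import random

# Illustrative parameters (hypothetical, not taken from the papers)
VOLUNTARY_MEAN_MS = 250   # typical visual reaction time
VOLUNTARY_SD_MS = 30
AGENCY_WINDOW_MS = 80     # EMS that beats the voluntary reaction by no more
                          # than this margin is still self-attributed

def trial(ems_latency_ms, rng):
    """Simulate one reaction trial with preemptive EMS actuation."""
    voluntary_rt = rng.gauss(VOLUNTARY_MEAN_MS, VOLUNTARY_SD_MS)
    actual_rt = min(ems_latency_ms, voluntary_rt)   # whichever fires first moves the hand
    preemption = voluntary_rt - ems_latency_ms      # how far the EMS beat the user
    agency = preemption <= AGENCY_WINDOW_MS         # small preemption -> still feels like "me"
    return actual_rt, agency

def summarize(ems_latency_ms, n=10_000, seed=0):
    """Mean reaction time and agency rate over n simulated trials."""
    rng = random.Random(seed)
    results = [trial(ems_latency_ms, rng) for _ in range(n)]
    mean_rt = sum(rt for rt, _ in results) / n
    agency_rate = sum(a for _, a in results) / n
    return mean_rt, agency_rate

if __name__ == "__main__":
    for latency in (120, 200, 240):
        rt, agency = summarize(latency)
        print(f"EMS at {latency} ms: mean RT {rt:.0f} ms, agency rate {agency:.2f}")
```

The sketch reproduces the qualitative trade-off: actuating very early yields the fastest reactions but the lowest agency rate, while actuating just before the predicted voluntary movement preserves agency at the cost of a smaller speed gain.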
A Flexible Relationship Between Mind and Body
I believe that if we are going to achieve fusions of human and computer, as well as the telework and VR applications being accelerated by technological development and societal demand—whether that means human-computer integration, remote-controlled robots, or avatars in virtual spaces—it will be ever more important to move past the prior assumption that our mind is what moves our body, and to properly design a flexible relationship between body and mind. I see Superception as a technology that allows us to reach that flexible relationship between mind and body.
n Minds, m Bodies
We should not assume that the relationship between mind and body is always one-to-one. In the future, I expect we will see one person using multiple bodies, and multiple people operating a single body. You could call these relationships between “n minds” and “m bodies.” In fact, studies of multiple people sharing control of a single virtual body [12, 13] and basic research into virtual, multiple-body ownership [14] are gradually beginning to appear.
A research project of mine that preceded these studies, done in 2015 with YCAM, was Project Parallel Eyes [15], a “paralleled first-person view sharing experience.” The results were exhibited by the Sony Wow Factory & Wow Studio at South by Southwest (SXSW) [16, 17], and that showing has had a big impact on the project’s current R&D progress.
To design a flexible relationship between mind and body, including the concept of n minds/m bodies, I think three aspects of “parallelization” will be the important design principles: awareness, action, and attribution. The first element, awareness, refers to paralleled environmental awareness, including perception and recognition, across multiple bodies. The second element, action, refers to the paralleled operation of multiple bodies by the mind of a single person. And the third element, attribution, refers to the psychological and technical design that creates self-attribution—the sense that our actions and experiences do in fact belong to us—in parallel with the above “paralleled” experiences. We are undertaking research on these three fronts into n-mind/m-body embodiment and the self-perceptions that take place there.
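As a concrete illustration of the “action” aspect, the control of one body shared among n minds can be modeled as a weighted blend of each person’s intended motion, in the spirit of the virtual co-embodiment studies cited above [12, 13]. The sketch below is a minimal illustration with hypothetical function names and weights, not a description of our actual system:

```python
import numpy as np

def blend_controls(intents, weights):
    """Blend n users' intended motions into one avatar command by
    weighted averaging; the weights are a design choice that sets
    each mind's share of control."""
    intents = np.asarray(intents, dtype=float)   # shape (n_minds, dof)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                              # normalize control shares
    return w @ intents                           # one command per degree of freedom

def route_to_bodies(command, m_bodies):
    """Send the blended command to m bodies (one mind-pool, many
    bodies); per-body adaptation would go here."""
    return [command.copy() for _ in range(m_bodies)]

# Two minds, 3-DoF hand-position targets, 70/30 control share
a = [0.0, 1.0, 0.5]
b = [1.0, 0.0, 0.5]
cmd = blend_controls([a, b], weights=[0.7, 0.3])   # -> [0.3, 0.7, 0.5]
bodies = route_to_bodies(cmd, m_bodies=2)
```

In this framing, the “attribution” aspect becomes the question of how large each weight can be while every participant still feels that the resulting movement is their own.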
Joining a Research & Development Program
One of the goals of the Moonshot Research & Development Program, an initiative led by Japan’s Cabinet Office, is “the realization of a society in which human beings can be free from limitations of body, brain, space, and time.”[18] Their idea is that cybernetic avatars will help realize a flexible relationship between mind and body, with the ultimate goal of creating a sustainable society (Society 5.0) where our current social issues are resolved and diverse ways of life are fostered. Surprisingly, this idea is rapidly becoming a reality in response to current demands for social change.
One moonshot research project, which I have been involved with since its proposal in early 2020, has received approval: “Development of Cybernetic Avatars to Create Shared-Experience with Harmonious Physical and Social Characteristics” (Project Manager: Prof. Kota Minamizawa, Keio University). I will work alongside the research team to move things forward, and the plan is for me to deal with the physical parallelization and integration generated by flexible relationships between mind and body.[19] The “cybernetic avatar” concept includes not only robotic bodies and 3D-graphic avatars, but also ICT and robotics technology that “augments physical, cognitive and perceptional capabilities.”
As of February 2021, we are preparing to recruit research assistants (RA, part-time) and project researchers for this research project. If you have any questions or inquiries, please feel free to contact us at superception.lab[at]gmail.com.
(This article is a modified version of an internal report.)
References
[1] Superception : www.sonycsl.co.jp/tokyo/3918/
[2] Fragment Shadow workshop : www.sonycsl.co.jp/news/10236/, www.youtube.com/watch?v=-UB1H2MAsvQ
[3] Florian Floyd Mueller, Pedro Lopes, et al., Next Steps for Human-Computer Integration. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20) DOI:https://doi.org/10.1145/3313831.3376242
[4] Licklider, J. C. (1960). Man-computer symbiosis. IRE Transactions on Human Factors in Electronics, (1), 4-11.
[5] Gallagher, S. : “Philosophical conceptions of the self. Implications for cognitive science”, Trends in Cognitive Science, vol. 4, pp.14–21, 2000.
[6] Botvinick, M., and Cohen, J. : “Rubber hands ‘feel’ touch that eyes see,” Nature, 391,756,1998.
[7] Shunichi Kasahara, Keina Konno, et al., Malleable Embodiment: Changing Sense of Embodiment by Spatial- Temporal Deformation of Virtual Human Body. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI ’17) DOI:https://doi.org/10.1145/3025453.3025962, [project page]
[8] Jun Nishida, Shunichi Kasahara, and Kenji Suzuki. Wired muscle: generating faster kinesthetic reaction by inter-personally connecting muscles. In ACM SIGGRAPH 2017 Emerging Technologies (SIGGRAPH ’17). DOI:https://doi.org/10.1145/3084822.3084844 [project page]
[9] Shunichi Kasahara, Jun Nishida, and Pedro Lopes. 2019. Preemptive Action: Accelerating Human Reaction using Electrical Muscle Stimulation Without Compromising Agency. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). DOI:https://doi.org/10.1145/3290605.3300873 [project page]
[10] Daisuke Tajima, Jun Nishida, Pedro Lopes, Shunichi Kasahara, Successful Outcomes in a Stroop Test Modulate the Sense of Agency When the Human Response and the Preemptive Response Actuated by Electrical Muscle Stimulation are Aligned, DOI:https://doi.org/10.1167/jov.20.11.173
[11] Shunichi Kasahara, Kazuma Takada, Jun Nishida, Kazuhisa Shibata, Shinsuke Shimojo, Pedro Lopes, Preserving Agency During Electrical Muscle Stimulation Training Speeds up Reaction Time Directly After Removing EMS. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI ’21). DOI:https://doi.org/10.1145/3411764.3445147
[12] Hagiwara, Takayoshi, Gowrishankar Ganesh, Maki Sugimoto, Masahiko Inami, and Michiteru Kitazaki. 2020. “Individuals Prioritize the Reach Straightness and Hand Jerk of a Shared Avatar over Their Own.” iScience, November, 101732.
[13] Fribourg, Rebecca, Nami Ogawa, Ludovic Hoyet, Ferran Argelaguet, Takuji Narumi, Michitaka Hirose, and Anatole Lecuyer. 2020. “Virtual Co-Embodiment: Evaluation of the Sense of Agency While Sharing the Control of a Virtual Body among Two Individuals.” IEEE TVCG. https://doi.org/10.1109/TVCG.2020.2999197.
[14] Guterstam, Arvid, Dennis E. O. Larsson, Joanna Szczotka, and H. Henrik Ehrsson. n.d. “Duplication of the Bodily Self: A Perceptual Illusion of Dual Full-Body Ownership and Dual Self-Location.” Royal Society Open Science 7 (12): 201911.
[15] Kasahara, Shunichi, Mitsuhito Ando, Kiyoshi Suganuma, and Jun Rekimoto. 2016. “Parallel Eyes: Exploring Human Capability and Behaviors with Paralleled First Person View Sharing.” In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 1561–72. CHI ’16. [project page]
[16] https://www.sonycsl.co.jp/news/3958/
[17] https://www.sonycsl.co.jp/news/7658/
[18] Moonshot Goal 1: Realization of a society in which human beings can be free from limitations of body, brain, space, and time by 2050. : https://www8.cao.go.jp/cstp/moonshot/sub1.html
[19] Moonshot Research & Development Program: https://www8.cao.go.jp/cstp/moonshot/project.html#a1