First-person experiences or qualia are the essentially subjective, personal feelings or experiences that each of us has (e.g. the feeling of "redness" or "cold"), and that cannot be described by words, formulas, programs or any other objective representation. According to some consciousness theorists, such as David Chalmers, an agent without such qualia would be a mere "zombie": a creature that may behave, sense and communicate just like a human being, but that lacks the most crucial aspect of consciousness. The "hard problem" of consciousness research then consists in elucidating the nature of these first-person experiences.
We believe that this approach is essentially misguided. If the hypothetical zombie behaves in all respects indistinguishably from a person with consciousness, then the principle of the identity of indiscernibles would force us to conclude that the "zombie" is conscious. How else would we know that the people around us aren't zombies? We assume they have conscious experience similar to our own because they behave in all other respects similarly to us. But if you were to take the zombie hypothesis seriously, you might start to entertain nightmarish fantasies in which you are the only real, conscious person in the world, and all the others are merely sophisticated automatons that pretend to be like you.
What we do agree with is that "first-person experience" is essentially different from "third-person experience". Every cybernetic agent complex enough to be capable of learning will develop an essentially unique experience, and no language or formalism is powerful enough to capture that experience fully. Although we may have used certain formal languages to program our cybernetic agent-robot, once the robot has become capable of learning, its program will change in myriad ways that are impossible to control or predict. If we could predict the robot's development, this would merely mean that we had done a poor job of design, producing a creature that lacks the creativity and flexibility to adapt to genuinely novel situations.
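The point can be made concrete with a toy sketch (the `LearningAgent` class and its reinforcement rule below are our own illustration, not a model of any actual robot): two agents that start from an identical design, but meet different streams of experience, end up with different internal states, even though their learning rule is fully known to the designer.

```python
import random

class LearningAgent:
    """A toy agent whose internal 'program' (here, action weights)
    is rewritten by every interaction with its environment."""

    def __init__(self, actions):
        self.weights = {a: 0.0 for a in actions}  # the mutable part of the design

    def act(self):
        # Prefer actions that were rewarded before; explore occasionally.
        if random.random() < 0.1:
            return random.choice(list(self.weights))
        return max(self.weights, key=self.weights.get)

    def learn(self, action, reward):
        # Simple reinforcement update: experience reshapes the weights.
        self.weights[action] += 0.1 * (reward - self.weights[action])

# Two agents start from an identical design ...
a, b = LearningAgent(["left", "right"]), LearningAgent(["left", "right"])
for _ in range(1000):
    # ... but each meets a different stream of rewards,
    # so their internal states diverge.
    for agent in (a, b):
        chosen = agent.act()
        agent.learn(chosen, random.gauss(0.0, 1.0))

print(a.weights, b.weights)  # identical designs, unique histories
```

Knowing the update rule tells us nothing about which weights an individual agent will end up with; that depends on its particular history of encounters with the world.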
Moreover, even for the simplest cybernetic agents, sensations, though perhaps not unique, are intrinsically subjective or affective. Agents do not sense the world as if they were impersonal, objective bystanders trying to internally represent the world as it is, independently of themselves. For an agent, a sensation is meaningful only to the degree that it relates to the agent's goals, which, in practice, means that it is relevant to the agent's individual survival. Thus, all sensation or awareness is from the beginning subjective or "first-person": it is directly connected to the "I", the "self", and only indirectly to the world outside.
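A minimal sketch may help (the function `valence` and its parameters are illustrative, not part of any established formalism): the same objective stimulus acquires a different subjective value depending on the agent's own goal.

```python
# A sensation acquires meaning only relative to the agent's goal state:
# the same reading is 'bad' for one agent and 'neutral' for another.

def valence(sensed_temperature, preferred_temperature):
    """Subjective value of a sensation: distance from the agent's own goal."""
    return -abs(sensed_temperature - preferred_temperature)

# The identical 'objective' stimulus (15 degrees) ...
reading = 15.0
# ... is sensed differently by agents with different goals.
print(valence(reading, preferred_temperature=37.0))  # -22.0: dangerously 'cold'
print(valence(reading, preferred_temperature=15.0))  # 0.0: 'comfortable'
```

Nothing in the number 15.0 by itself is "cold"; the coldness exists only in relation to what the agent needs.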
A cybernetic system is defined by its relations: both the internal relations that determine its organization and the external relations it has with its environment. Consciousness emerges from this network of relations, not from the "objective", material components out of which the agent is built. What matters is not whether the robot is made from flesh and blood or from silicon chips, but how the robot's different sensations, goals, memories and actions are interrelated so as to produce an autonomous agent.
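Again, a toy sketch can show what such a network of relations might look like (the `CyberneticAgent` class below is a deliberately simplistic control loop of our own devising, not a model of consciousness): the agent's behavior is determined entirely by how sensation, memory, goal and action are connected, and would be the same whatever substrate it ran on.

```python
class CyberneticAgent:
    """A toy control loop: what matters is how sensation, goal,
    memory and action are interrelated, not what the parts are made of."""

    def __init__(self, goal):
        self.goal = goal      # internal reference value
        self.memory = []      # record of past perceptions

    def sense(self, world_state):
        self.memory.append(world_state)
        return world_state

    def decide(self, perception):
        # Action is chosen relative to the goal, not to the world 'as it is'.
        error = self.goal - perception
        return 1.0 if error > 0 else -1.0 if error < 0 else 0.0

    def step(self, world_state):
        return self.decide(self.sense(world_state))

# A simple run: the agent steers the variable toward its own goal.
agent = CyberneticAgent(goal=20.0)
state = 5.0
for _ in range(30):
    state += agent.step(state)
print(round(state, 1))  # settles at 20.0, the agent's goal
```

The same loop could be realized in neurons, transistors or clockwork; its identity as an agent lies in the pattern of relations, not in the material.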
Consciousness is not some mysterious substance, fluid, or property of matter, but a level of organization emerging from abstract processes and relations. Some theorists, because they cannot otherwise explain where the consciousness in our brain comes from, search for it in elementary particles (a form of panpsychism that has been suggested as a way to tackle the "hard problem"). This search is misguided. Their intuition may be correct insofar as particles, just like any other system, should be seen as relations rather than just as clumps of matter. But to attribute consciousness to these extremely simple types of relations is merely a way to evade the really hard, but solvable, problem of reconstructing the complex cybernetic organization of the human mind in all its details and subtleties.