Zhang Bo: Can silicon-based machines generate consciousness? For now, there is only philosophy


News, September 6 — At today's Baichuan2 open-source large-model launch, Zhang Bo, academician of the Chinese Academy of Sciences and honorary dean of the Institute for Artificial Intelligence at Tsinghua University, said that whether silicon-based machines can have consciousness is a question that science is not yet equipped to address; for now there are only philosophical debates.

Zhang Bo said that, to this day, the world remains confused about the theoretical working principles of large models and the phenomena they produce, and every conclusion has been attributed to an "emergence phenomenon." The so-called emergence, he said, is a way of "giving yourself an out — whatever you cannot explain clearly, you call emergence." In fact, this reflects that we do not really understand it at all.

In his view, the reason a large model can produce very coherent and diverse human language rests mainly on three measures. The first is the semantic representation of text: the words in the text, along with sentences and paragraphs, are all turned into vectors, which creates the conditions for constructing a continuous topological space. The second is the transformer's attention mechanism, which guarantees consistency with the context. The third is the prediction of the next word. Under these three conditions, a very smooth manifold is constructed in a continuous topological space; this manifold is a local metric space, and it represents a conditional probability distribution over words, so sampling from this manifold yields human-like text.
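The three measures Zhang describes — vector representations, attention over the context, and next-word prediction — can be illustrated with a minimal toy sketch. This is not the architecture of any real model: the embeddings are random, and the single attention head stands in for a full transformer. It only shows how the three pieces fit together into a conditional distribution over the next word.

```python
import numpy as np

rng = np.random.default_rng(0)

# Measure 1: semantic representation -- every token becomes a vector,
# placing the text in a continuous space. (Toy random embeddings here.)
vocab_size, d_model = 10, 8
embedding = rng.normal(size=(vocab_size, d_model))

def attention(x):
    """Measure 2: one self-attention head -- each position mixes in
    information from earlier positions, which is what keeps the
    output consistent with its context."""
    scores = x @ x.T / np.sqrt(x.shape[-1])
    # causal mask: a position may only look at itself and earlier positions
    mask = np.tril(np.ones_like(scores))
    scores = np.where(mask == 1, scores, -np.inf)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

def next_token_distribution(token_ids):
    """Measure 3: next-word prediction -- a conditional probability
    distribution over the vocabulary, conditioned on the preceding text."""
    x = embedding[token_ids]      # tokens -> vectors
    h = attention(x)              # context mixing
    logits = h[-1] @ embedding.T  # score every vocabulary entry
    probs = np.exp(logits - logits.max())
    return probs / probs.sum()

probs = next_token_distribution([1, 4, 2])
print(probs.shape)  # (10,): one probability per vocabulary entry
```

Sampling the next word from `probs`, appending it, and repeating is, in outline, how generation proceeds.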

Zhang believes that ChatGPT differs from human natural-language generation in one most fundamental respect: the language ChatGPT generates is externally driven, whereas human language is intention-based, driven by our own purposes. GPT therefore does not know what it is doing, and the correctness and reasonableness of its content cannot be guaranteed. If you want GPT language to be the same as human natural language, you must make the computer conscious — but "whether a silicon-based machine can be conscious is a question that science is not yet equipped to address; there are only philosophical debates."

The following is a transcript of Zhang Bo's speech:

Today I came to attend the release of Baichuan's 7B and 13B open-source models. First of all, congratulations — congratulations to Baichuan for launching a high-quality open-source model just a few months after its founding, and for reaching nearly 5 million downloads in 3 months. That is a great achievement, and it contributes to China's large-model industry.

But that is not what I want to talk about today. I want to take this opportunity to note that the company positions this open-source model as a contribution to academic research, and I want to explain and express my support for that kind of work.

As you know, our country has by now launched many large models at scales ranging from billions to tens of billions of parameters. These models are mostly concentrated on applications in vertical domains; very few — at least for now — are positioned to support academic research. I think such academic research is very important. What does it study? Mainly the large model itself. Why is this needed? Because this work is both urgent and important: to this day, the world remains confused about the theoretical working principles of large models and the phenomena they produce, and every conclusion has been attributed to the "emergence phenomenon." The so-called emergence is a way of giving yourself an out — whatever you cannot explain clearly, you call emergence. In fact, this reflects our lack of clarity about it. So I think this problem must be clarified before we can develop large models with characteristics unique to China.

On this topic I will mainly discuss a few questions. The first question we must answer is why such a large model can produce very coherent, diverse human language — why OpenAI's model can produce human-like text, entirely human-sounding words, not gibberish. This is actually very surprising; do not assume it had to turn out this way. OpenAI took a big risk in doing this: it did not actually know whether training on text at such a scale would converge, or where it would converge to. Then the model came out, and although much of its output was not of high quality, it was amazing that it could produce human-like text.

How do we explain this? How do we understand it? My view is that it mainly depends on three measures. The first is the semantic representation of text: we turn all the words in the text, including sentences and paragraphs, into vectors — not only the words, but after abstraction each of these levels becomes a vector — which creates the conditions for constructing a continuous topological space. If it had stayed discrete, there would have been no such space at all. The second is the transformer's attention mechanism, which guarantees contextual consistency. The third is the prediction of the next word. Under these three conditions, what is finally trained is a manifold constructed inside a compact continuous topological space. This manifold is a local metric space — a very good mathematical property; intuitively, it forms a very smooth manifold — and it represents the conditional probability distribution of words, where the condition is the entire preceding text. So when you sample from this manifold, what comes out must be human-like text; nothing else can come out, because everything nearby is semantically similar, even if only slightly. We need to study this problem; if we understand it, I do not think we will need so much data in the future. Some later work showed there is a threshold: the manifold forms once your data exceeds it, and fails to form if the data is not large enough. But what exactly is that number, and is it what we actually need now? That is a question to be studied.
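The claim that "sampling from this distribution must come out human-like" can be made concrete with a deliberately tiny stand-in: a bigram model estimated by counting on a toy corpus. The corpus and the model here are invented for illustration; the point is only that sampling from a conditional distribution fit to fluent text yields fluent-looking (if varied) text, with nothing guaranteeing its truth.

```python
import random
from collections import defaultdict, Counter

# A toy corpus standing in for the training text.
corpus = ("the cat sat on the mat . "
          "the dog sat on the rug . "
          "the cat saw the dog .").split()

# Estimate P(next word | previous word) by counting -- a crude stand-in
# for the conditional distribution a large model represents.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample_next(prev, rng):
    """Draw the next word from the conditional distribution."""
    words, freqs = zip(*counts[prev].items())
    return rng.choices(words, weights=freqs, k=1)[0]

rng = random.Random(0)
out = ["the"]
while out[-1] != "." and len(out) < 12:
    out.append(sample_next(out[-1], rng))
print(" ".join(out))
```

Every sampled sentence is locally fluent because each step only ever moves to words seen following the current one — but the model may happily assert "the cat sat on the rug," which it never saw. Nothing in the sampling process checks correctness, which is Zhang's point in the next section.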

The second question is about hallucinations. Is the model bound to produce hallucinations? This involves the difference between ChatGPT and human natural-language generation. The most fundamental difference is that the language ChatGPT generates is externally driven, while human language is intention-based: our generation is controlled by our own intentions, while its generation is driven from outside, and it does not know what it is doing. So the correctness and reasonableness of its content cannot be guaranteed. Before alignment, it could basically reach only about 60%; a large amount was unreasonable and incorrect, because the method just described — the method of forming the manifold — does not guarantee that the content is correct or reasonable. This problem can only be solved through alignment, and I think too little alignment work has been done domestically — less than abroad. Consider why the model could go from GPT-3.5 to GPT-4, with so many changes in just a few months: that is mainly the credit of alignment. We have been somewhat dismissive of alignment, thinking we just need to find a few people to do labeling; that is completely wrong — the people doing this at OpenAI are the best team in the world at it. As you know, more than 80 people at OpenAI worked on this, 10 from mainland China, 3 of whom had worked in our team — they are very good. So our country may not be paying enough attention to this.

This involves the issue of governance versus openness. When we do alignment, we are in fact doing governance, hoping the model will not produce problems. But we must recognize that after governance, its quality and diversity will certainly decline: the more governance, the more quality is affected. So there is a very important problem here: how do we balance openness and governance? As I just said, such results are inevitable. If you ask what ChatGPT's greatest feature is, it is the diversity of its generated results — that is its soul, because only with diversity can it be creative. But if you pursue diversity you will inevitably produce errors; these are two sides of the same problem. So when governing we must strike a balance with quality, and I think we should study this issue further.

I provisionally call this language "GPT language": the language ChatGPT generates, which we have never seen before, and which I think is different from human natural language. So the fifth question here is: what direction should our future work take? Do we want to align GPT language completely with human natural language? Consider both the possibility and the necessity of this. I think it is unlikely, because to make GPT language exactly like human natural language, you would have to solve one problem — giving GPT self-awareness. As I just said, GPT is externally driven, while human natural language is internally driven: driven by self-awareness, based on intention. If you want GPT language to reach the level of human natural language, you must make the computer conscious. Whether a silicon-based machine can have consciousness is a question science is not yet equipped to address; there are only philosophical debates. Philosophically, there are now two roads. According to the materialist, behaviorist point of view, we need only pursue behavioral similarity, not consistency of internal mechanism. Artificial intelligence has this school — call it the materialist or behaviorist school — and the vast majority of AI today takes this road; it is the mainstream of artificial intelligence.

There is also a minority school in artificial intelligence — internalism — which holds that only a system with the same internal mechanism as a human being can achieve true intelligence. We think this is difficult or impossible to do, because a silicon-based machine cannot be made the same as human carbon-based intelligence. This is hard to justify even philosophically, let alone scientifically.

Finally, the question of necessity. What artificial intelligence actually pursues is machine intelligence — an intelligence different from human intelligence, one that has advantages over humans in some respects. That is the goal we pursue, because only with that goal can humans and machines coexist peacefully. Artificial intelligence is certainly not about building a machine exactly like a human, and there is absolutely no need for that. Why should we make a machine just like a human? If we needed that, wouldn't it be enough to simply have a few more people? So in terms of necessity, it is unnecessary, and I think it makes little sense to keep arguing about this. What matters most at present is to study and understand GPT language. Only by thoroughly understanding it can we better develop it and use it — or, from an industry perspective, develop a healthier artificial-intelligence industry. I hope Baichuan Intelligence can play a leading role in this regard. Thank you.