Artificial intelligence (AI)

Artwork based on Krista Leesi's drawing (Stuudio 22) and Janelle Shane's neural network generated colors (tumblr).

Copeland, B. J. 2017. Artificial intelligence (AI). Britannica Academic, Encyclopædia Britannica, URL: http://academic.eb.com/levels/collegiate/article/artificial-intelligence/9711 (Accessed 7 Jun. 2017)

Artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.

Is sociality a task commonly associated with intelligent beings, and do AIs and robots have to perform social tasks? Looking at Siri, the Amazon analogue, and various developments in Japanese robotics, the answer appears to be an overwhelming yes. Not to mention the various theoretical and software-based formulations of social or collaborative AIs and the long-standing study of human-computer interaction.

The yardstick of artificial intelligence here is the human mind, or at least intelligent beings. One thing expected of "full" artificial intelligences is human-grade sociality, which should count among the tasks commonly associated with intelligent beings. One could debate conceptions of intelligence that include social, emotional, etc. intelligence, or the connecting links between intelligence and sociality (e.g. Gregory Bateson's metaorganism).

The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience.

What about intellectual processes not characteristic of humans, i.e. superhuman capabilities?

These abilities characteristic of humans seem characteristic of other semiotic subjects as well, and concern the semiotic threshold. That is to say, the abilities to reason, discover meaning, generalize, and learn from past experiences are all concerns of semiotics and even of the rational psychology of Peirce's contemporaries.

Systems endowed (endow: to enable; Estonian "äivestama", Aavik's coinage, to make capable) with intellectual processes characteristic of humans are still under development. The project frequently involves such abilities as reasoning, discovering meaning, generalizing, and learning from lived experience. These traits are characteristic of humans and other living beings, at least from a biosemiotic perspective, because these processes are tied to modelling the world and operating in it. What becomes questionable in the development of human-capable artificial intelligences is the limitedness of human abilities and, for example, the following paradox or dead end: if one of the qualities characterizing humans is the striving toward the superhuman (spirituality, cultivation of the mind, honing one's abilities and skills to be the best of the best), is it not inevitable that this urge to be more will also infect human-minded artificial intelligences? The end-problem here is that of rewriting one's own base code, i.e. the question of an A++ grade artificial intelligence.

Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks - as, for example, discovering proofs for mathematical theorems or playing chess - with great proficiency.

Computers can be programmed → algorithms → expert systems.

The complexity of tasks a computer can perform is a topic unto its own.

Computers can handle tasks that are difficult or impossible for the human mind, but the complexity of these tasks reveals an asymmetry in the modelling of worlds: the computer copes better with large volumes of data, while the human apparently copes with more numerous and more complex data types. The ways of processing information are, after all, materially different.

Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge.

Human flexibility → plasticity | fuzziness, everyday chaos, the terrible Firstness of Being.

Tasks requiring a lot of human heuristics. From somewhere below:

In principle, a chess-playing computer could play by searching exhaustively through all the available moves, but in practice this is impossible because it would involve examining an astronomically large number of moves. Heuristics are necessary to guide a narrower, more discriminative search.

Computer programs probably cannot surpass the flexibility of the human mind, because a program is a limited unit; an operating system, by contrast, encompasses many concurrently running programs, and in Her, for instance, the AI was cast in the role of an operating system, though a personal computer's capacity to run an artificial intelligence on its own still seems like science fiction at present (in the film, it may well have been cloud-based).
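The exhaustive-versus-heuristic contrast quoted above can be made concrete with a back-of-the-envelope count. A rough sketch, assuming a toy game tree with branching factor b and depth d; the numbers and the `keep=2` pruning are arbitrary illustrations (a real chess tree is astronomically larger):

```python
# Exhaustive vs heuristic search over an abstract game tree with
# branching factor b and depth d (toy numbers only).
def exhaustive_count(b, d):
    """Number of positions examined by brute-force search."""
    return sum(b ** k for k in range(d + 1))

def heuristic_count(b, d, keep=2):
    """Same tree, but a heuristic keeps only the `keep` most promising
    moves at each position (a narrower, more discriminative search)."""
    return sum(min(b, keep) ** k for k in range(d + 1))

print(exhaustive_count(30, 6))   # hundreds of millions of positions
print(heuristic_count(30, 6))    # 127 positions
```

The point is only the order-of-magnitude gap: even a crude heuristic collapses the search space by many powers of ten.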

On the other hand, some programs have attained the performance levels of human experts and professionals in performing certain specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition.

In the limited sense of expert systems? (i.e. computer programs that discover proofs for mathematical theorems or play chess with great proficiency)

Should look into compiling an overview of areas where artificial intelligence technologies are currently applied, various subdivisions thereof. Medicine, online searching, and speech recognition are mentioned here, but the list should most definitely include law, geometry, transportation, industrial design, etc.

Some programs are capable of performing tasks at the level of human experts and professionals, especially tasks that involve processing large masses of data, e.g. finding the relevant laws among thousands, or diagnosing symptoms on the basis of countless case histories. I could compile a preliminary overview of these fields of application once I start reading the articles.

All but the simplest human behaviour is ascribed to intelligence, while even the most complicated insect behaviour is never taken as an indication of intelligence. What is the difference? Consider the behaviour of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only then, if the coast is clear, carries her food inside.

Never say never:

Insect intelligence is an under-studied field, but a particularly weird and dynamic one where huge discoveries are being made almost every year. The biggest problem with asking about animal intelligence is defining what we even mean by “intelligence.” The animals generally thought of as smartest—among them the great apes, dolphins, and the octopus—are believed to be intelligent because they demonstrate some of the behaviors that we associate with our own superiority as humans. These qualities include problem solving, advanced communication, social skills, adaptability, and memory, and also physical traits like the comparative size of the brain or number of neurons in the brain. (Source)

In the above example the digger wasp "demonstrate[s] some of the behaviors that we associate with our own superiority as humans": the female wasp is described as behaving like a paranoid human housewife who leaves the grocery bags at the door to check why the front door is open.

In addition to reasoning, discovering meaning, generalizing, and learning from past experience, the improved list of human qualities (or human-like qualities projected on other beings) now includes problem solving, advanced communication, social skills, adaptability, and neurological complexity.

In humans, all but the simplest behaviour is ascribed to intelligence, yet even the most complicated insect behaviour is not taken to indicate intelligence. Here it is perhaps immediately clear that intelligence is a so-called higher-order descriptor, i.e. even among humans we distinguish less and more intelligent people, and the "intelligentsia" as a separate class; the concept is internally evaluative. We judge other beings by the yardstick of our own species, which is why insects do not qualify. There may also be an etymological inheritance here, since intelligibility is comprehensibility, and in more intelligent beings one notices, e.g., phenomenological intersubjectivity or theory of mind (ToM). That is, given my disciplinary bias, advanced communication seems the most important to me. The list of traits characteristic of humans grows: problem solving, advanced communication, social skills, adaptability, and brain size and neurological complexity.

The real nature of the wasp's instinctual behaviour is revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence - conspicuously absent in the case of Sphex - must include the ability to adapt to new circumstances.

It is beginning to look as if the instinct/sentience debate of the 1880s is nearly unavoidable in artificial intelligence discourse. I might as well go along with it and include those guys (discussing curious sentiments).

The relation between intelligence and adaptability smacks of early evolutionary psychology, i.e. habituation, plasticity, and all that good stuff.

Such behaviour of the wasp cannot be considered intelligent, because she does not adapt to new conditions but seems to be running an evolutionary survival or precaution program, i.e. acting instinctively. By analogy with such a view of adaptability (centred on attention and the importance of some missing element), one could presumably speak of the social-phatic function, especially in connection with intervention or interfering factors (interference).

Psychologists generally do not characterize human intelligence by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.

Luckily for me, these components of intelligence are nearly identical to the ones discussed by early rational psychologists and my favourite philosopher, E. R. Clay. The only thing that could be missing is problem solving, though this may be subsumed under ratiocination or something else. Basically, these were all considered either faculties, capabilities, or instincts of the (human) mind.

Apparently, Lotman had discussed this exact thing in the 1970s, arguing against limited conceptions of the intellect and narrow formulations of creativity as a special class of problem-solving. This "combination of many diverse abilities" remains ambiguous, on the other hand, as even here the list is a hodgepodge of psychologese, giving off the impression of free selection (cf. how the 19th century schoolbook psychologists structured their discourse: reasoning, perception, language, etc. constituted neatly discrete chapters).

Learning seems to have been one of the first components to be directly studied in relation to AI (cf. about Turing below). Reasoning probably came along with the analytical philosophers, who formalized logic in the 1940s and 50s. Problem solving sounds like the 1960s and 70s (and Google ngram affirms this hunch - the term rises suddenly in the early 1950s and reaches its peak in the following decades). Perception is a philosophical universal, though machine perception is novel.

There are a number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that the next time the computer encountered the same position it would recall the solution.

This is learning from past experience. In semiotic parlance, it probably amounts to something like forming a sign or a symbol, based on logical arguments about which chess moves might win the round. Since I'm not at home in logic, I'd leave this discussion and perhaps chess altogether out of my own discourse. In short: randomized trial-error and memory.
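The randomized trial-error-and-memory loop can be sketched in a few lines. A minimal illustration, in which the puzzle table, position names, and moves are all hypothetical stand-ins for a real mate-in-one solver:

```python
import random

# Hypothetical mate-in-one puzzles: each position maps its legal moves to
# whether that move delivers mate. All names here are invented for illustration.
PUZZLES = {
    "position-A": {"Qh7": True, "Nf3": False, "a4": False},
    "position-B": {"Rd8": True, "Kg2": False},
}

memory = {}  # rote memory: position -> remembered solution

def solve(position):
    """Trial and error with rote memory: try random moves until one mates,
    then store the answer so the same position is solved instantly next time."""
    if position in memory:                # recall past experience
        return memory[position]
    moves = list(PUZZLES[position])
    while True:                           # blind trial and error
        move = random.choice(moves)
        if PUZZLES[position][move]:       # mate found
            memory[position] = move       # rote learning: memorize it
            return move

solve("position-A")   # found by trial and error
solve("position-A")   # second call: recalled from memory
```

The second call never touches the random search; that is the whole of "learning" in this simplest sense.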

Aren't the different AI approaches currently utilized (semantic web, artificial neural networks, deep learning, etc.) all just different forms of automatized learning? It's beginning to make more and more sense that Joe should be so taken with Bateson's contextual learning theory (particularly due to the affinity between artificial intelligence and cybernetics).

In more abstract terms, the learning AI presents a sort of epistemological threat weirdly akin to the folk-psychological fear of intellectuals. The scene I have in mind, specifically, is one from a forgotten sci-fi movie in which humans trapped in a virtual world discover that the AI they are battling can learn! Unlike non-learning bots that you can defeat simply by learning their routines, a bot that learns your own habitual moves, style of playing, etc. presents a much more difficult task - now you have to defeat a nega-version of your own learned routines, in effect defeat yourself before you can defeat the bot.

This simple memorizing of individual items and procedures - known as rote learning - is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalizing involves applying past experience to analogous new situations.

Exactly where I went wrong in my semiotic interpretation: this rote learning is closer to indexicalization, i.e. forming tokens (individual items and procedures, though the latter would come closer to an argument or legisign). Generalizing, in this limited sense, would amount to cue reduction, i.e. pruning exformation (chess moves or routines that don't work) from information (those that do and thus become "significant").

The problem for me here is the same one I experienced between Peirce and Clay: the former's categorical typologies don't correspond very neatly to the processual thinking of Clay, whose system is a bit less systemic but very poignant in matters of generalization, unification, and other similar "cognitive" activities (as a case in point, Clay has defined something like the trivialization of cognitive dissonance theory, which could make for a fine pair with the exformation strain of thinking).

Checking up on Clay, the search yields the following terminology: general synthesis, caused by latent operation of instances on the mind - does this not sound like the formation of legisigns (general syntheses) from tokens (instances)? - is defined as "the mental act which generates a beginning of knowledge, whether conscious or unconscious"; general data, which consists of general theses ("Things equal to the same are equal to one another" is general while "It rains" is particular data), i.e. the results of generalization; general name, like "custom" (and "culture"), etc. "The general synthesis of the burned child" probably does not require explanation.

For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as jump unless it previously has been presented with jumped, whereas a program that is able to generalize can learn the "add ed" rule and so form the past tense of jump based on experience with similar verbs.

The genesis of linguistic rules involves generalization. This point immediately calls to mind the recent reddit thread about why languages in English are named the way they are, -ish for some, -ian for others, etc. - which could serve as a secondary example of this kind of rule-formation in human socio-linguistic history.
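The rote-versus-generalizing contrast for the "add ed" rule can be sketched as follows. Everything here (the verb list, the `induce_rule` helper) is an illustrative toy, not a model of actual morphological learning:

```python
# Rote learner: only knows the exact pairs it has been presented with.
rote_memory = {"walk": "walked", "talk": "talked", "play": "played"}

def rote_past(verb):
    return rote_memory.get(verb)   # None for anything unseen, e.g. "jump"

# Generalizing learner: induces a common suffix rule from the same examples.
def induce_rule(pairs):
    """If every past form is the present form plus one shared suffix,
    extract that suffix; otherwise give up (return None)."""
    suffixes = {past[len(verb):] for verb, past in pairs.items()
                if past.startswith(verb)}
    return suffixes.pop() if len(suffixes) == 1 else None

suffix = induce_rule(rote_memory)   # the "add ed" rule, learned from data

def general_past(verb):
    return verb + suffix            # applies to unseen regular verbs too

print(rote_past("jump"))       # None: never presented with "jumped"
print(general_past("jump"))    # "jumped": the rule covers the novel case
```

The rote learner fails exactly where the article says it should, while the generalizer forms the past tense of jump from experience with similar verbs.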

This latter point is effectively what Lotman's cultural typologies were about - formalizing intercultural differences with the aid of crypto-cybernetic (Russian formalist, organicist, dynamic) literary and aesthetic theory. The very point of Lotman's cultural semiotic work on artificial intelligence theory was recently summarized in the need for artificial systems to be trans-cultural (strong AI should be strong enough to understand all human languages - Google translate go fluent - and able to discern intercultural differences and act appropriately to delicate human sensibilities, i.e. "A robot may not injure a human being or, through inaction, allow a human being to come to harm.").

The cultural heuristics condition thus states that a successful strong AI should be able to understand not only natural language utterances, but also human history, myths, ideologies, art, and even memes - how legit could a strong AI be if it couldn't make out the current financial state of the Pepe market?

To reason is to draw inferences appropriate to the situation. Inferences are classified as either deductive or inductive. An example of the former is, "Fred must be in either the museum or the café. He is not in the café; therefore he is in the museum," and of the latter, "Previous accidents of this sort were caused by instrument failure; therefore this accident was caused by instrument failure."

What about abduction? There's so much literature on Peircean abduction for artificial intelligence that it would be remiss not to include the third route of inference.
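For illustration, the three routes of inference can be caricatured in a few lines. The deductive and inductive cases follow the article's Fred and accident examples; the abductive "wet street" scenario is my own toy addition:

```python
# Deduction: the truth of the premises guarantees the conclusion.
places = {"museum", "café"}            # Fred must be in one of these
not_in = "café"                        # premise: he is not in the café
deduced = (places - {not_in}).pop()    # therefore: the museum

# Induction: past cases lend support, not certainty.
past_accidents = ["instrument failure"] * 5
induced = max(set(past_accidents), key=past_accidents.count)

# Abduction (Peirce's third route, absent from the Britannica list):
# pick the simplest hypothesis that would explain the surprising fact.
explanations = {"rain": {"wet street"},
                "burst pipe": {"wet street", "no water"}}
observed = {"wet street"}
abduced = min((h for h, effects in explanations.items() if observed <= effects),
              key=lambda h: len(explanations[h]))

print(deduced, induced, abduced)   # museum, instrument failure, rain
```

Only the deductive line is truth-preserving; the other two conclusions are defeasible, which is precisely the point of the classification.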

I'm pretty sure that if we'd look up older definitions of reason, it would not include such an apparent contextualist assumption about appropriateness to the situation, which almost sounds like "goodthinkful".

Interestingly, Clay's chapter on Reason begins with a definition of probability, then opinion, belief, doubt, etc., which seems to amount to a theory of the truth-probabilities of opinions. What stands out, among other things, is the distinction between practical and non-practical reasons, the latter leaving the matter of appropriateness to the situation hanging in the air - why should reasoning, if it is not involved with acting in a situation, include appropriateness to the situation? What, after all, is the situation of reasoning? Is it the social context as a whole or some portion of it (i.e. ideologies, geopolitical conditions), or what?

The most significant difference between these forms of reasoning is that in the deductive case the truth of the premises guarantees the truth of the conclusion, whereas in the inductive case the truth of the premise lends support to the conclusion without giving absolute assurance.

This made me realize that Clay's certive and uncertive forms of knowledge could pertain to induction and deduction, though this leaves abduction hanging and I'm not competent to solve the quagmire.

Reading up on the certive and non-certive forms of knowledge, it would appear that the point of the distinction is to make room for forms of knowledge which do not suppose certitude, i.e. knowledge not as "true justified beliefs" but as "justified beliefs" only - and even then, what's to stop anyone from discarding justification, too? This amounts to yet another empty para-epistemological problem: what if, in the process of being taught human culture, the AI "goes mad", as Lotman proposed, and discards the condition of certitude for knowledge - or what if it is simply incapable of a so-called "reality check"?

"Knowledge does not suppose the truth of what is known. If it did, man would be infallible. There is a false as well as a true knowledge." says Clay, but what if man constructs an infallible machine? What will become of false knowledge, fake news, and universes of discourse pocketed away from verifiability?

There has been considerable success in programming computers to draw inferences, especially deductive inferences. However, true reasoning involves more than just drawing inferences; it involves drawing inferences relevant to the solution of the particular task or situation. This is one of the hardest problems confronting AI.

Once again, it sounds like what is missing is abduction, though it's surely no panacea. The problem of relevance is one neatly close to semiotics, especially in the guise of significance. But it also permeates various semiotic theories, most familiar to me being Bühler-Mukarovsky-Jakobson's, in which the dominant semiotic function is the relevant modus operandi, the functional focus of semiosis.

The cultural heuristics condition, if we were to engage in some "cognitive modelling", would be involved in the evaluation of relevance, particularly in social tasks including human-machine interactions.

On a philosophical note, this talk of relevance once again calls up Aristotle's teleology. So:

If then in what we do there be some end which we wish for its own account, choosing all the others as means to this, but not every end without exception as a means to something else (for so we should go on ad infinitum, and desire would be left void and objectless), - this evidently will be the good or the best of all things. And surely from a practical point of view it much concerns us to know this good; for then, like archers shooting at a definite mark, we shall be more likely to attain what we want. If this be so, we must try to indicate roughly what it is, and first of all to which of the arts or sciences it belongs. It would seem to belong to the supreme art or science, that one which most of all deserves the name of master-art or master-science. Now Politics seems to answer to this description. For it prescribes which of the sciences the state needs, and which each man shall study, and up to what point; and to it we see subordinated even the highest arts, such as economy, rhetoric, and the art of war. Since then it makes use of the other practical sciences, and since it further ordains what men are to do and from what to refrain, its end must include the ends of the others, and must be the proper good of man. For though this good is the same for the individual and the state, yet the good of the state seems a grander and more perfect thing both to attain and to secure; and glad as one would be to do this service for a single individual, to do it for a people and for a number of states is nobler and more divine. This then is the present inquiry, which is a sort of political inquiry. (NE 1094a2)

By way of arbitrary "reading into", I now hold that the connection between artificial intelligence and politics (in the narrower modern sense) seems tantalizing. I should most definitely look to applications in politics, the democratic process, and legislation. But a more true-to-form reading of this passage would suggest that the ultimate goal of AI is to manage human conduct of life, i.e. become our custodians as in I, Robot. This appears to be suggested even by the final sentence of Turing's "Intelligent Machinery" (1948): "At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler's Erewhon." It appears that in CHAPTER XXIII: THE BOOK OF THE MACHINES, Butler "write[s] about the possibility that machines might develop consciousness by Darwinian Selection" (Wiki) - basically, "A Clockwork Origin" (Futurama S06E09).

Problem solving, particularly in artificial intelligence, may be characterized as a systematic search through a range of possible actions in order to reach some predefined goal or solution. Problem-solving methods divide into special purpose and general purpose. A special-purpose method is tailor-made for a particular problem and often exploits very specific features of the situation in which the problem is embedded. In contrast, a general-purpose method is applicable to a wide variety of problems.

This sounds like the abstract and concrete distinction applied to problem solving: either solving a special-purpose part or a general-purpose whole. The analogy probably carries over to micro- and macro-world modelling.

After reading about Lotman's critique of viewing creativity as a form of unpredictable problem-solving, it does seem that the lack of predefined goals and solutions in many lines of creative work poses a problem. But then again, there is so much theory of AI planning that someone has most definitely already treated this in great detail.

One general-purpose technique used in AI is means-end analysis - a step-by-step, or incremental, reduction of the difference between the current state and the final goal. The program selects actions from a list of means - in the case of a simple robot this might consist of PICKUP, PUTDOWN, MOVEFORWARD, MOVEBACK, MOVELEFT, and MOVERIGHT - until the goal is reached.
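Means-end analysis as described can be sketched on a one-dimensional grid. A minimal illustration using two of the listed means (MOVEFORWARD, MOVEBACK); the numeric state and the distance measure are my own assumptions for the sake of the sketch:

```python
# Means-end analysis: at each step, pick the action that most reduces
# the difference between the current state and the goal.
ACTIONS = {"MOVEFORWARD": 1, "MOVEBACK": -1}   # the robot's available means

def means_end(start, goal):
    state, plan = start, []
    while state != goal:
        # evaluate each means by the remaining difference it would leave
        name, delta = min(ACTIONS.items(),
                          key=lambda item: abs(goal - (state + item[1])))
        state += delta
        plan.append(name)
    return plan

print(means_end(0, 3))   # ['MOVEFORWARD', 'MOVEFORWARD', 'MOVEFORWARD']
```

The loop body is the "incremental reduction of the difference" in miniature; a real planner would also handle dead ends and actions with preconditions (PICKUP before PUTDOWN, etc.).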

Writes Aristotle:

Every art and every kind of inquiry, and likewise every act and purpose, seems to aim at some good: and so it has been well said that the good is that at which everything aims. But a difference is observable among these aims or ends. What is aimed at is sometimes the exercise of a faculty, sometimes a certain result beyond that exercise. And where there is an end beyond the act, there the result is better than the exercise of the faculty. Now since there are many kinds of actions and many arts and sciences, it follows that there are many ends also; e.g. health is the end of medicine, ships of shipbuilding, victory of the art of war, and wealth of the economy. But when several of these are subordinated to some one art or science, - as the making of bridles and other trappings to the art of horsemanship, and this in turn, along with all else that the soldier does, to the art of war, and so on, - then the end of the master-art is always more desired than the ends of the subordinate arts, since these are pursued for its sake. And this is equally true whether the end in view be the mere exercise of faculty or something beyond that, as in the above instances. (NE 1094a1)

That is to say, the special-purpose methods are like the ends of the subordinate arts, and the general-purpose method is like the end of the master-art - like the individual contributions vs the general war-effort (or, alternatively, managing human conduct of life).

Many diverse problems have been solved by artificial intelligence programs. Some examples are finding the winning move (or sequence of moves) in a board game, devising mathematical proofs, and manipulating "virtual objects" in a computer-generated world.

All of these problems are "dumb" in the etymological sense of the word because they don't involve language and advanced communication skills (i.e. they are the workings of a mute calculator).

In perception the environment is scanned by means of various sensory organs, real or artificial, and the scene is decomposed into separate objects in various spatial relationships. Analysis is complicated by the fact that an object may appear different depending on the angle from which it is viewed, the direction and intensity of illumination in the scene, and how much the object contrasts with the surrounding field.

One of the problems I've mulled over in this regard is the absence of embodiment. Barring vision, humans are still embodied in an environment through all the other senses, but the artificial brain in a tin-suit vat may have all of its sensory data corrupted. Though, now that I think about it, this also occurs in humans with loss of consciousness, sensory overload, altered or abnormal states of consciousness, etc.

At present, artificial perception is sufficiently well advanced to enable optical sensors to identify individuals, autonomous vehicles to drive at moderate speeds on the open road, and robots to roam through buildings collecting empty soda cans.

I was sure the latter example was going to involve the Amazon robots that drive around the endless store-house floors, carrying goods.

A language is a system of signs having meaning by convention. In this sense, language need not be confined to the spoken word. Traffic signs, for example, form a minilanguage, it being a matter of convention that {hazard symbol} means "hazard ahead" in some countries. It is distinctive of languages that linguistic units possess meaning by convention, and linguistic meaning is very different from what is called natural meaning, exemplified by statements such as "Those clouds mean rain" and "The fall in pressure means the valve is malfunctioning."

This textbook phraseology made me wonder what could happen if the etymological meaning of "system" were emphasized in this formulation. The Greek sustēma, from sun/syn- 'with'/'together' and histanai 'set up'/'cause to stand, make or be firm', yielded the active synistanai, "to place together, organize, form in order", and the passive systema, "organized whole, a whole compounded of parts" - an arrangement, especially of a totality such as the whole creation or the universe, or, for example, the "animal body as an organized whole, sum of the vital processes in an organism", and, in philosophy, a "set of correlated principles, facts, ideas, etc." This would seem to emphasize the aspect of conventionality on its own, as anything "put together" is put together by someone, and presumably in an intentional arrangement. If language is a system of signs, then language is a collection of standing conventions. Now the question arises: how is it put together, and by whom? Is it humanity or the language-family as a whole (macro-perspective), or the individual at every instance of use (micro-perspective)?

An important characteristic of full-fledged human languages - in contrast to birdcalls and traffic signs - is their productivity. A productive language can formulate an unlimited variety of sentences.

This paragraph-sentence should ideally be woven into the fabric of a concise definition of language because, along with conventionality, this capacity for novel meaningful combinations (or something to that effect) is part of Charles Morris's five conditions of a comsystemic language. (Now that I think about it, perhaps it is time to re-read his notorious book, since his bio-behaviouristic semiotics should ideally encompass transhuman semiotic subjects just as easily as it does other living organisms - it seems safe to say that Morris's semiotic was proto-zoosemiotic, influencing Thomas Sebeok.)
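Productivity in this sense is easy to demonstrate with a toy recursive grammar. The rules below are invented purely for illustration; the point is only that a finite stock of conventions with one recursive rule yields an unlimited variety of sentences:

```python
import random

# A toy productive "language": finite conventions, unbounded output.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the robot"], ["the wasp"]],
    "VP": [["thinks"], ["thinks that", "S"]],   # recursion -> productivity
}

def generate(symbol="S"):
    """Expand a symbol by convention until only terminal strings remain."""
    if symbol not in GRAMMAR:                   # terminal string
        return symbol
    expansion = random.choice(GRAMMAR[symbol])
    return " ".join(generate(part) for part in expansion)

print(generate())   # e.g. "the wasp thinks that the robot thinks"
```

Because "VP" can re-invoke "S", no finite list could enumerate every sentence this three-rule grammar can formulate, which is exactly the contrast with a closed repertoire of birdcalls or traffic signs.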

It is relatively easy to write computer programs that seem able, in severely restricted contexts, to respond fluently in a human language to questions and statements. Although none of these programs actually understands language, they may, in principle, reach the point where their command of a language is indistinguishable from that of a normal human.

Restricted contexts → micro-worlds → virtual reality simulations.

Responding fluently to questions and statements does not advanced social and communication skills make.

What is "understanding language" anyway? My own arbitrary condition would be the ability to formulate new and sensible (senseful) words, expressions, and corresponding meanings and concepts. But I have to acknowledge that it is very easy to go further into what it means to be a language-user, a full-fledged one, a language-creator, etc. This problematic is very neat in the "meaning generation" discourse of cognitive semioticians and various other pragmatically oriented approaches. My point is simply this: if it is within the linguistic qualifications of a normal human to make up new words on the spot, on the fly, etc. then an AI indistinguishable from humans in this regard should be able to do the same. But won't this throw the AI into the terrible chaos of Firstness?

What, then, is involved in genuine understanding, if even a computer that uses language like a native human speaker is not acknowledged to understand? There is no universally agreed upon answer to this difficult question. According to one theory, whether or not one understands depends not only on one's behaviour but also on one's history: in order to be said to understand, one must have learned the language and have been trained to take one's place in the linguistic community by means of interaction with other language users.

Define genuine understanding. Doesn't "genuine" appear in one of Peirce's definitions of the final interpretant?

Can a computer ever really use language like a native human speaker? There wouldn't be a cultural accent; there'd be an ontological accent, with a cultural accent on top (depending on the AI, as corporate and national cultures are sure to influence AI development in profound ways).

AI research follows two distinct, and to some extent competing, methods: the symbolic (or "top-down") approach, and the connectionist (or "bottom-up") approach. The top-down approach seeks to replicate intelligence by analyzing cognition independent of the biological structure of the brain, in terms of the processing of symbols - whence the symbolic label. The bottom-up approach, on the other hand, involves creating artificial neural networks in imitation of the brain's structure - whence the connectionist label.

Ah, some much needed clarity to these distinctions. Now the Dreyfus dispute makes some more sense. It should be interesting to see which camp is preferred by authors making more extensive use of semiotics. Some papers I've seen are most definitely symbolic (top-down) and involve an interpretation of the Peircean model akin to that of Snow (in the 1940s, cf. the real interpretation and the case of stacking bricks, in analogy with robotic studies following FREDDIE, which could very well be called "pincer automation").

To illustrate the difference between these approaches, consider the task of building a system, equipped with an optical scanner, that recognizes the letters of the alphabet. A bottom-up approach typically involves training an artificial neural network by presenting letters to it one by one, gradually improving performance by "tuning" the network. (Tuning adjusts the responsiveness of different neural pathways to different stimuli.)

What is scary about this is the neurological lingo where programmers would speak of modulation or something similarly technical - definitely not "neural pathways" and "stimuli".

In contrast, a top-down approach typically involves writing a computer program that compares each letter with geometric descriptions. Simply put, neural activities are the basis of the bottom-up approach, while symbolic descriptions are the basis of the top-down approach.
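The contrast between the two approaches can be sketched in a few lines. Everything below - the 3x3 bitmaps, the toy perceptron, the function names - is my own invented illustration, not anything from the article: a hand-written geometric test stands in for the top-down approach, and a single unit "tuned" by presenting letters one by one stands in for the bottom-up approach.

```python
# A toy contrast of the two approaches on 3x3 "letter" bitmaps.
# The shapes, names, and numbers are invented for illustration.

L_SHAPE = (1, 0, 0,
           1, 0, 0,
           1, 1, 1)
T_SHAPE = (1, 1, 1,
           0, 1, 0,
           0, 1, 0)

def top_down_is_L(img):
    """Symbolic approach: match a geometric description --
    a filled left column plus a filled bottom row."""
    left_column = all(img[i] for i in (0, 3, 6))
    bottom_row = all(img[i] for i in (6, 7, 8))
    return left_column and bottom_row

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Connectionist approach: present samples one by one and
    'tune' the weights whenever the unit misclassifies."""
    w, b = [0.0] * 9, 0.0
    for _ in range(epochs):
        for img, target in zip(samples, labels):
            out = 1 if sum(wi * xi for wi, xi in zip(w, img)) + b > 0 else 0
            err = target - out            # 0 when already correct
            w = [wi + lr * err * xi for wi, xi in zip(w, img)]
            b += lr * err
    return w, b

def bottom_up_is_L(img, w, b):
    return sum(wi * xi for wi, xi in zip(w, img)) + b > 0

weights, bias = train_perceptron([L_SHAPE, T_SHAPE], [1, 0])
```

The symbolic version knows *what an L looks like* in advance; the connectionist version knows nothing at the start and ends up with weights that classify the same two shapes, without any description of "L" ever being written down.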

On the basis of this description, I would ascribe general-purpose methods to bottom-up (connectionist) approaches and specific-purpose methods to top-down (symbolic) approaches. Hopefully I'll see how this pans out or why these distinctions don't match.

In The Fundamentals of Learning (1932), Edward Thorndike, a psychologist at Columbia University, New York City, first suggested that human learning consists of some unknown property of connections between neurons in the brain. In The Organization of Behavior (1949), Donald Hebb, a psychologist at McGill University, Montreal, Canada, suggested that learning specifically involves strengthening certain patterns of neural activity by increasing the probability (weight) of induced neuron firing between the associated connections.
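Hebb's suggestion is usually compressed into the slogan "cells that fire together wire together", and the textbook simplification of his rule is small enough to state in full. The learning rate and activity values below are invented for illustration:

```python
# Textbook simplification of Hebb's proposal: a connection grows
# stronger in proportion to coincident activity of the two units
# it links. Learning rate and activity values are invented.

def hebbian_update(weight, pre, post, lr=0.5):
    """Return the weight after one step: delta = lr * pre * post."""
    return weight + lr * pre * post

w = 0.0
for _ in range(4):                 # repeated co-activation...
    w = hebbian_update(w, pre=1.0, post=1.0)
# ...raises the weight, i.e. the probability of induced firing;
# if either unit is silent (pre or post is 0), nothing changes.
```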

I can find neither book online but the titles are familiar enough from introductory psychology texts. Did not know that the learning theory of neural connections was so new, though - I would have guessed that William James made that connection (though, to be fair, Thorndike studied under James).

In 1957 two vigorous advocates of symbolic AI - Allen Newell, a researcher at the RAND Corporation, Santa Monica, California, and Herbert Simon, a psychologist and computer scientist at Carnegie Mellon University, Pittsburgh, Pennsylvania - summed up the top-down approach in what they called the physical symbol system hypothesis. This hypothesis states that processing structures of symbols is sufficient, in principle, to produce artificial intelligence in a digital computer and that, moreover, human intelligence is the result of the same type of symbolic manipulation.

The latter statement about symbolic manipulation would result logically from the symbolic interactionism of the Chicago School (the connection between Bateson and neurolinguistic programming forces itself upon the mind). The physical symbol system hypothesis sounds, when put like that, analogous to physicalism in analytical philosophy. Could use a review for elucidating interdisciplinary parallels.

During the 1950s and '60s the top-down and bottom-up approaches were pursued simultaneously, and both achieved noteworthy, if limited, results. During the 1970s, however, bottom-up AI was neglected, and it was not until the 1980s that this approach again became prominent. Nowadays both approaches are followed, and both are acknowledged as facing difficulties. Symbolic techniques work in simplified realms but typically break down when confronted with the real world; meanwhile, bottom-up researchers have been unable to replicate the nervous systems of even the simplest living things. Caenorhabditis elegans, a much-studied worm, has approximately 300 neurons whose pattern of interconnection is perfectly known. Yet connectionist models have failed to mimic even this worm. Evidently, the neurons of connectionist theory are gross oversimplifications of the real thing.

The neck bone's connected to the head bone, but what do you connect virtual neurons with?

Employing the methods outlined above, AI research attempts to reach one of three goals: strong AI, applied AI, or cognitive simulation. Strong AI aims to build machines that think. (The term strong AI was introduced for this category of research in 1980 by the philosopher John Searle of the University of California at Berkeley.) The ultimate ambition of strong AI is to produce a machine whose overall intellectual ability is indistinguishable from that of a human being. As is described in the section Early milestones in AI, this goal generated great interest in the 1950s and '60s, but such optimism has given way to an appreciation of the extreme difficulties involved. To date, progress has been meagre. Some critics doubt whether research will produce even a system with the overall intellectual ability of an ant in the foreseeable future. Indeed, some researchers working in AI's other two branches view strong AI as not worth pursuing.

Strong AI is what I call, in Estonian approximation, human-equivalent intelligence. Strong AI may remain a science-fiction fantasy, but I doubt humans will stop creating human-like effigies, and for all intents and purposes, androids are already in existence. It is not yet out of the question that technology will catch up to artonic dreams.

In Juri Lotman's semiotics of art (a subset of cultural semiotics), the work of art is viewed as an artificial "thinking device" (with a certain "special purpose method" of its own, be it even as simple as evincing a feeling) as part of culture - a naturally and historically evolved mechanism of the collective mind with a shared data pool and distributed information processing and actualization. Artonics, or the cybernetics of art, is a proposed field of scientific inquiry into the ways art operates in human cultures, how it condenses collective memory traces and reflects the experiential worlds of "genuine" intelligences. Whether this would be beneficial to AI research, I don't know. Here's something on the topic:

According to Lotman, culture is a feature that distinguishes our species not only from other animals but also from other forms of intelligence and especially from those that are assigned to machines. As mentioned in chapter I, in the 1960s-70s, cybernetics permeated almost all levels of Soviet academia, having positioned itself as the universal science. In the context of "the scientific and technical revolution," the problem of artificial intelligence naturally became one of the dominant themes; after all, the cybernetic viewpoint practically eliminated the essential boundary between man and the machine. As Norbert Wiener and his colleagues state in a 1943 article, "A uniform behavioristic analysis is applicable to both machines and living organisms, regardless of the complexity of the behavior" (Rosenblueth, Wiener, and Bigelow 1943: 22). So according to the teleological point of view - in cybernetics, teleology is understood as purposeful behavior controlled by feedback - humans, animals, and machines are functionally identical to each other and therefore can be studied by the same methods.

Since feedback-related conceptions (channel, central processing) are my favourites, I'll just note that this conception of teleology appeals to me due to its promise of developing autocommunication, but most likely it amounts to mere self-regulation in guidance systems, which translates into self-communication in Morris's semiotics as short-term memory auxiliaries.

Applied AI, also known as advanced information processing, aims to produce commercially viable "smart" systems - for example, "expert" medical diagnosis systems and stock-trading systems.

Medical and financial. I'm sure there are meta-reviews of applied AI results out there - I should search them out and compose a page for this blog (or some other). This could be useful for the open letter - one of the first practical goals would be to find out if there are emulable applications in the tourism industry (for "Eston").

In cognitive simulation, computers are used to test theories about how the human mind works - for example, theories about how people recognize faces or recall memories. Cognitive simulation is already a powerful tool in both neuroscience and cognitive psychology.

I should definitely look this up because this is the first I'm hearing of cognitive simulations (or the term has simply passed me by, unanchored).

The earliest substantial work in the field of artificial intelligence was done in the mid-20th century by the British logician and computer pioneer Alan Mathison Turing. In 1935 Turing described an abstract computing machine consisting of a limitless memory and a scanner that moves back and forth through the memory, symbol by symbol, reading what it finds and writing further symbols. The actions of the scanner are dictated by a program of instructions that also is stored in the memory in the form of symbols. This is Turing's stored-program concept, and implicit in it is the possibility of the machine operating on, and so modifying or improving, its own program. Turing's conception is now known simply as the universal Turing machine. All modern computers are in essence universal Turing machines.
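The stored-program concept can be made concrete: in the interpreter sketched below, the "program" is itself just data (a transition table) held alongside the tape, which is exactly the possibility the passage describes. The interpreter and the binary-increment machine are my own illustration, not anything from the article:

```python
# A minimal Turing machine interpreter. The program is ordinary
# data -- a table mapping (state, symbol) to (write, move, state) --
# so in principle a machine could operate on its own program.

def run(program, tape, state="start", head=0, max_steps=1000):
    tape = dict(enumerate(tape))           # sparse "limitless" memory
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")       # "_" marks a blank cell
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1}[move]
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# Example machine: increment a binary number. The scanner walks
# right to the end of the digits, then carries leftward.
INCREMENT = {
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "_"): ("1", "L", "halt"),
}
```

Running `run(INCREMENT, "101")` scans back and forth over the tape exactly as the passage describes; a *universal* machine is the same idea one level up, with someone else's transition table written onto the tape as symbols.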

The last sentence of this passage sums up the impression given by the very first. It amounts to something like the operating system re-writing its own kernel, the bootstrap, or whatever else.