At this point I feel like some crazy person talking to themselves when I say this, but "AI" is a research field that includes many more sub-fields than machine learning, let alone deep learning, and it's impossible for someone to be an "AI Expert" while ignoring all of this, even if ignoring it is exactly what lets them believe they are one. I appreciate that synecdoche is a thing, but to recommend a "roadmap" to becoming an "AI Expert in 2020" while ignoring most of AI is...
... well, I don't know what it is anymore. Sign of the times? Perhaps the field (AI, that is) should abandon its name, just to be sure that if a backlash ever comes against deep learning it won't take everyone else's research down with it?
On the other hand, I feel a bit like the horses and the cows in Animal Farm, when the pigs took over the Revolution. Not that there was any revolution, not really, but the way that research trends have shifted lately, from what would make good science to what will get you hired by Google, is a little bit of a shock to me. And I started my PhD just three years ago. It's come to the point where I don't want to associate my research with "machine learning" and I don't want to use the term in my thesis, for fear of the negative connotations (of sleazy practices and shoddy research) that might become attached to it in the time to come.
And it's such a shame, because the people who really advanced the field, people like Yoshua Bengio, Jürgen Schmidhuber, Geoff Hinton, Yann LeCun etc., are formidable scientists, dedicated to their preferred approaches and with the patience to nurture their ideas against all opposition. The field they helped progress so much deserved better.
GOFAI didn't work. ML is the only AI approach that ever did. Sure, your tree searches and your planners and your grammars and your knowledge graphs and your expert systems still have their places... but they're not AI, and they never will be.
Amicably, can I ask you how you know what you say above to be true? Where does your knowledge of AI come from?
Edit: I'm asking because your comment shows a confusion that is all too common on the internets today: thinking of "GOFAI" (symbolic AI) and "ML" (machine learning) as two somehow incompatible and perhaps mutually exclusive sub-fields of AI. This is as far from the truth as it could be. For instance, machine learning really took off as a subject of research in the 1980s with the work of Ryszard Michalski and others, who developed machine learning approaches for the purpose of overcoming the "knowledge acquisition bottleneck", i.e. the difficulty of hand-crafting production rules for expert systems. Indeed, most of the early machine learning systems were propositional-logic based, i.e. symbolic. And of course, decision trees, one of the most widely used and best-known machine learning approaches to this day, hail from that time and also learn propositional-logic, i.e. symbolic, models.
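To make this concrete, here is a toy sketch (assuming scikit-learn is available, purely for illustration): the tree learned from data is a symbolic model that can be read off directly as if-then rules.

    # A decision tree learned from data is a symbolic, propositional model:
    # every root-to-leaf path reads off as an if-then rule.
    from sklearn.tree import DecisionTreeClassifier, export_text

    X = [[0, 0], [0, 1], [1, 0], [1, 1]]   # two boolean features, a and b
    y = [0, 1, 1, 1]                       # target: logical OR of a and b
    tree = DecisionTreeClassifier().fit(X, y)
    print(export_text(tree, feature_names=["a", "b"]))
    # Prints the nested tests, i.e. rules such as
    # "IF a <= 0.5 AND b <= 0.5 THEN class 0, ELSE class 1".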
Of course, most people today know "ML" as a byword for deep learning, or at best statistical pattern recognition (if that). It's just another aberration brought on by the sudden explosion of interest in a once small field.
I refer you to Michalski's textbook, "Machine Learning: An Artificial Intelligence Approach" and Tom Mitchell's "Machine Learning" for more information on the early days of the field, i.e. until ca. 2000.
It doesn't much matter what people wrongly thought in the past, any more than one has to study the four fundamental elements of water, air, earth and fire to become an expert in chemistry. GOFAI didn't work, hence the winter. If it did I'd think otherwise.
It's true that you can, in some places, merge the two. The significant majority of the time, this just makes your ML system worse in the long run, per The Bitter Lesson. Occasionally—very occasionally—your problem is fundamentally simple, so something brittle like AlphaZero works, even though we'd still rather shave the fixed-function parts off with MuZero and the like. But it's no coincidence this is reserved for simple, thoughtless problems (like brute-force move search) and kept isolated from the thinking, as intelligence needs generalization and abstraction, and GOFAI doesn't generalize or abstract.
> And of course, decision trees, one of the most widely used and best-known machine learning approaches to this day
Though decision trees live a healthy life in data analysis, alongside things like k-means clustering, they're obviously not AI.
> Of course, most people today know "ML" as a byword for deep learning, or at best statistical pattern recognition (if that).
‘Statistical pattern recognition’ is just name-calling devoid of real criticism. I can ask GPT-3,
> a = ["fitness", "health", "heart"], b = ["lifting", "curls", "squats"], c = ["running", "jogging"], so what is b.append("pushups")?
and it'll happily answer
> b.append("pushups") returns ["lifting", "curls", "squats", "pushups"]
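(Pedantically, in real Python list.append mutates in place and returns None, so read that as asking what b contains afterwards:)

    # Actual Python semantics, for comparison with GPT-3's answer:
    b = ["lifting", "curls", "squats"]
    result = b.append("pushups")
    print(result)  # None (append mutates in place)
    print(b)       # ['lifting', 'curls', 'squats', 'pushups']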
or I can give it the Loebner Prize questions, which it can almost ace, or I can test it on 10-digit addition and it gets 60% right (with thousands separators, but still suffering from BPEs), or I can ask it to do quirky tasks like shuffling letters, or ask it to write sentences with novel words and find that it understands nuanced differences between the definitions...
You can scream ‘but it's just statistical pattern recognition’ all you want, but the criticism doesn't mean anything if ‘statistical pattern recognition’ includes this level of generalization, reasoning and algorithmic sophistication in natural language with scaling curves that assure us the best is yet to come.
I "can scream all I want"?
Before I continue this conversation, I'd like to ask, do you think this is an appropriate response to my comment? I responded to your original comment respectfully and politely.
Note that I did not use "pattern recognition" in a derogatory manner, neither did I say that anything is "just" pattern recognition. I think you may be mistaking my comment for a different opinion that you disagree with.
While waiting for Veedrac's reply to my comment, I thought I'd clarify here what I meant by "statistical pattern recognition". "Pattern recognition" is the name of the sub-field of AI research from which statistical machine learning grew into what it is today, for example many problems in machine vision are typically considered as pattern recognition problems, etc. "Statistical" refers to the methods used to achieve the task, e.g. neural networks are normally filed under "statistical AI" (for historical reasons).
Like I say in my previous comment, modern machine learning research started as a discipline that was separate from pattern recognition (and Pattern Recognition was sometimes considered distinct from AI, as a research subject). I quote below from Tom Mitchell's wildly influential paper, "Generalization as Search" (Artificial Intelligence 18, 1982):
"5.2. Statistical pattern recognition
The field of statistical pattern recognition deals with one important subclass of generalization problems. In this subclass, the instances are represented by points in n-space, and the generalizations are represented by decision surfaces in n-space (e.g. hyperplanes, polynomials of specified degree). The matching predicate corresponds to determining whether a given point (instance) lies on one side or another of a given decision surface (generalization). The field of Statistical Pattern Recognition has developed very good generalization methods for particular classes of decision surfaces. Many of these methods are relatively insensitive to errors in the data and some have well understood statistical convergence properties, under certain assumptions about the probability distribution of input instances."
"In contrast to work in Statistical Pattern Recognition, work on the generalization problem within Artificial Intelligence has focused on problems involving a different class of instance and generalization languages. These languages are incompatible with numerically oriented representations that describe objects as feature vectors in n-space. For example, Winston's program [21] for learning descriptions of simple block structures such as arches and towers, represents instance block structures in terms of their component blocks and relationships among these. In this domain the natural representation for instances is a generalized graph rather than a feature vector. (...)
https://www.sciencedirect.com/science/article/abs/pii/000437...
Pattern recognition these days is dominated by neural methods, e.g. CNNs for object classification etc., and so is machine learning, so it kind of makes sense that the three terms are used interchangeably. But unfortunately many people are not aware of the historical context of the terms, hence misunderstandings like the one by Veedrac above, regarding my comment that machine learning has become a byword for statistical pattern recognition: it's not that I'm dismissive of statistical pattern recognition, or of the ability of deep learning systems to perform it; it's that the terms have really become interchangeable, I think even among researchers.
In any case, before commenting on a complex subject of research with a long history, my recommendation remains to first become well acquainted with the subject and its history. Otherwise, one runs the risk of appearing confused.
Edit: Note that Mitchell's use of "generalisation" in the excerpt above does not refer to the ability of a model to generalise to test, or unseen, data. Rather, "generalisation" in Mitchell's paper refers to the ability of a system "to take into account a large number of specific observations, then to extract and retain the important common features that characterize classes of these observations." Mitchell's paper describes a theory of search guided by generalisation relations between hypotheses and observations.
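To give a toy illustration of what Mitchell means (my own sketch in Python, not his notation): start from the most specific hypothesis over attribute vectors and generalise it just enough to cover each new positive observation, with "?" standing for "any value".

    # Generalisation in Mitchell's sense: extract the common features that
    # characterise a class of observations, by search over hypotheses.
    def generalise(hypothesis, example):
        # Replace every attribute that disagrees with the example by "?".
        return tuple(h if h == e else "?" for h, e in zip(hypothesis, example))

    positives = [("sunny", "warm", "high"),
                 ("sunny", "warm", "normal")]

    h = positives[0]               # start with the most specific hypothesis
    for ex in positives[1:]:
        h = generalise(h, ex)      # generalise just enough to cover ex

    print(h)  # ('sunny', 'warm', '?')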
> Note that I did not use "pattern recognition" in a derogatory manner, neither did I say that anything is "just" pattern recognition. I think you may be mistaking my comment for a different opinion that you disagree with.
> [...]
> In any case, before commenting on a complex subject of research with a long history, my recommendation remains to first become well acquainted with the subject and its history. Otherwise, one runs the risk of appearing confused.
Sorry, I did misunderstand what you were saying.
I do find it somewhat baffling how you're talking as if 1980s terminology is today's terminology, or that everyone should just recognize it as the default way of things. As much as you argue I should adopt the subject's history, I'm saying you should adopt its present.
The point you're making would have held fine two decades ago, but as a description of modern ML, it's like calling a sedan a cart. Even if you don't mean it as a criticism (‘just a cart without a horse’), the world has moved on.
>> Sorry, I did misunderstand what you were saying.
Thank you for your honesty and there's no need to apologise.
Let me explain my point above, if I can. You said that search, planning, expert systems, etc, are "not AI, and they never will be". I understand that as saying that such systems are not artificial intelligences, in the sense of an intelligent machine created by humans out of whole cloth (without trying to define what "intelligent" means).
That is certainly true, but it is also uncontroversial that the above are sub-fields of the field of research that is known as "AI". That is, there is a field that researches approaches that could lead to the creation of intelligent machines, and that field is called "AI"; and then there is the ultimate target of that field, which is to create "AI".
My original comment bemoans the fact that in some sectors, "AI", as a field of research, has become synonymous with only one sub-sub-field of AI research, that is, deep learning.
Contrary to what you state, these "GOFAI" fields (symbolic AI, if you will) are still active and, far from having "failed" in any way, they are "SOTA" in their respective tasks. For example, the field of automated theorem proving is dominated by systems that employ the resolution rule, a logic-based approach, and while recent efforts have been made to make inroads into theorem proving with deep neural nets (e.g. a recent attempt to use transformers), results are still very far from being comparable to the traditional approaches. I know more about automated theorem proving than I know e.g. about planning or search (because my PhD is in a subject close to the former) but my understanding is that in those fields too, traditional techniques dominate- which is why research in them is still active.
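To make "the resolution rule" concrete, here is a toy sketch of a single propositional resolution step (my own illustration; real provers are of course engineered very differently):

    # Toy propositional resolution: a clause is a frozenset of literals,
    # a literal is a string, and negation is marked with a leading "~".
    def negate(lit):
        return lit[1:] if lit.startswith("~") else "~" + lit

    def resolve(c1, c2):
        # Return every resolvent of the two clauses.
        resolvents = []
        for lit in c1:
            if negate(lit) in c2:
                resolvents.append((c1 - {lit}) | (c2 - {negate(lit)}))
        return resolvents

    # Resolving (p OR q) with (NOT q OR r) on q yields (p OR r).
    print(resolve(frozenset({"p", "q"}), frozenset({"~q", "r"})))
    # [frozenset({'p', 'r'})]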
If I am permitted to toot my own horn, my broader subject area can be described as "program learning", i.e. machine learning of arbitrary programs from examples of their inputs and outputs. In this area also, deep learning systems are hopelessly outclassed by symbolic approaches, not least because these approaches learn precise representations of target programs (rather than approximations) from a mere handful of examples (four or five, say).
And so on. These are just some examples of AI research that is ongoing, that is not deep learning, and that is state of the art for its respective tasks. In view of that, I consider that using "AI" to mean "deep learning" (as the article above does) is not only misinformed but also misinforming and actively harmful. In the sense of harming people's understanding, that is.
As to your comment about how GOFAI "failed", I'm afraid this opinion, which is common, is also misinformed. Here, too, a knowledge of the history of the field helps, but to summarise, the last winter happened because of strictly political reasons and for no reason that had anything to do with the scientific merits, or practical successes of the relevant approaches. In fact, expert systems, now widely considered a failure, were one of the first big success stories of AI; a success story that was cut short only because, again, of political reasons.
I could talk about the AI winter subject for hours (it's a favourite subject of mine) but a good starting point is this article by the editor of the IEEE journal Intelligent Systems (Avoiding Another Winter): https://www.computer.org/csdl/magazine/ex/2008/02/mex2008020.... The wikipedia page on https://en.wikipedia.org/wiki/AI_winter also has a decent summary and some sources. Finally, see the wikipedia article on the Lighthill Report, https://en.wikipedia.org/wiki/Lighthill_report, which is more relevant to AI research in the UK (the Report killed AI research in the UK, dead), and this review of the report by John McCarthy, the man who coined the term AI and also created LISP on the side: http://www-formal.stanford.edu/jmc/reviews/lighthill/lighthi...
>> As much as you argue I should adopt the subject's history, I'm saying you should adopt its present.
More to the point, I'm recommending that you should know a subject's history before forming a strong opinion about its present and future. As for me, I'm firmly rooted in the present. About half of the literature I read is on neural networks- and that's not even the subject of my research. But if you think about it, in the era where the trend is to use deep learning, the most interesting results can only come from not using deep learning. In a gold rush, if everyone is digging up Widow's Creek, then Widow's Creek is the last place to dig for gold.
I don't want to dismiss automated provers, as they are often high-quality, useful tools (SAT solvers in particular), but if you're interested in learning AI, traditional approaches are no longer more than briefly and tangentially relevant.
That's my point, that you don't need to learn woodworking to build a car, even if wooden carts still have occasional uses, and some cars have wood trim or wooden trailers.
> If I am permitted to toot my own horn, my broader subject area can be described as "program learning", i.e. machine learning of arbitrary programs from examples of their inputs and outputs. In this area also, deep learning systems are hopelessly outclassed by symbolic approaches, not least because these approaches learn precise representations of target programs (rather than approximations) from a mere handful of examples (four or five, say).
I have looked at the program synthesis literature before and it really does not seem very advanced to me. The General Program Synthesis Benchmark Suite lists unsolved benchmarks like “Given three strings n1, n2, and n3, return true if length(n1) < length(n2) < length(n3), and false otherwise”, and that's with 100 examples. So, probably less practically useful than GPT-3, which wasn't even trained on the task.
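(For concreteness, the program the suite asks a synthesizer to find from those ~100 input-output examples is a one-liner, written out by hand:)

    # The benchmark task, implemented directly:
    def ordered(n1: str, n2: str, n3: str) -> bool:
        return len(n1) < len(n2) < len(n3)

    assert ordered("a", "bc", "def")
    assert not ordered("abc", "bc", "d")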
> the last winter happened because of strictly political reasons and for no reason that had anything to do with the scientific merits, or practical successes of the relevant approaches
I disagree, but you've not given me much more than a list of vague references that don't all exactly support your argument, so I don't have much clue where you diverge.
If GOFAI worked, we'd see some indication of it working (again, in an AI context), but we don't.
> But if you think about it, in the era where the trend is to use deep learning, the most interesting results can only come from not using deep learning. In a gold rush, if everyone is digging up Widow's Creek, then Widow's Creek is the last place to dig for gold.
This analogy doesn't work. Neural networks are giving unparalleled results by the bucket. That's why people are digging there. A gold mine might have plenty of competing miners, but it's sure going to be a lot more likely to give you chunks of gold than a random patch of grass in your backyard.
I'm happy to see we're still in healthy disagreement. However, I have to apologise for confusing you by describing my field as "program learning" which is admittedly vague, but I didn't want to go into the particulars. My field is not program synthesis, which is constructing programs from complete, formal specifications. Rather, it's Inductive Programming and more specifically Inductive Logic Programming (ILP), which is learning programs from examples, i.e. "incomplete specifications". I'm not familiar with the General Program Synthesis Benchmark Suite, but the problem you list (test three strings are ordered by length) is trivial for ILP approaches. Again, I don't want to point to my own research, directly (I'm going through a bit of a modesty phase) (oh, alright, it's just that the documentation of my project is crap). However, I have to say that even so, if something is a difficult problem for program synthesis approaches, then it's very unlikely that neural networks will do any better at it. For instance, do you know how well deep neural nets perform on this benchmark? I can't find any results with a quick search.
You make the point that one does not need to learn these "obsolete" AI approaches because they are not relevant anymore. I don't understand why you say that. These approaches are still state of the art for their respective tasks and there is no other approach that has been shown to do any better, including deep neural networks. In what sense are they "no longer more than briefly and tangentially relevant" as you say?
Regarding the gold rush, the point of the analogy is that in a gold rush only a very few people will ever strike gold. This is exactly the state of research into deep learning currently. After a few initial big breakthroughs, like CNNs and LSTMs, progress has stalled and the vast, vast majority of published papers (or papers that only ever appear on arxiv) present incremental results, if that. Literally thousands of deep learning papers are published each month and the chance of having an impact is minuscule. From my point of view, as a researcher, going into deep learning right now would be career suicide. Not to mention that, while the first few successes were achieved by small academic teams with typical academic motives (er, glory), the game has now passed into the hands of big corporate teams that have quite different incentives, so it's almost impossible for small teams or individual researchers to make a dent.
As to the winter and whether GOFAI works, perhaps I haven't convinced you with my sources, but in that case, I have to go back to my earlier question and ask where your knowledge comes from. You clearly have a strong opinion on GOFAI and the AI winter of the '80s, but what knowledge does this opinion come from? Can you say? And if this sounds like a challenge, well, that's because it is. I'm challenging you to re-examine the basis of your convictions, if you like. Because to me, they sound like they are not well-founded and that you should put some water in your wine. The things you say "don't work", work and the things you say work, don't work as well as you say.
For my part, I certainly agree that GPT-3, or the next iteration of a large transformer-based language model, can be a useful tool, but such a tool will always be limited by the fact that it's, well, a language model, and it can only do what language models do, which does not include e.g. the ability for reasoning (despite big claims to the contrary) or arithmetic (ditto) or generation of novel programs. For instance, the append() example you show above is clearly memorised: you haven't given the model any examples of append(), so it can't possibly learn its definition from examples. It only returns a correct result because it's seen the results of append() before. Not the same result, but close enough. Like I say, this ability can definitely be useful- but its usefulness is limited compared to the ability to learn arbitrary programs, never before seen.
btw, why do you need to give it the list "a"? What happens if this is omitted from the prompt?
> Rather, it's Inductive Programming and more specifically Inductive Logic Programming (ILP), which is learning programs from examples, i.e. "incomplete specifications". I'm not familiar with the General Program Synthesis Benchmark Suite, but the problem you list (test three strings are ordered by length) is trivial for ILP approaches.
The General Program Synthesis Benchmark Suite works from input-output examples, not “complete, formal specifications”.
How would you tackle this with ILP?
> However, I have to say that even so, if something is a difficult problem for program synthesis approaches, then it's very unlikely that neural networks will do any better at it. For instance, do you know how well deep neural nets perform on this benchmark?
I'm not aware of any serious at-scale attempts. Your option is basically to try few-shot with GPT-3.
OTOH, learning these trivial programs from 100 examples is a largely artificial framing used to support a field which hadn't worked its way up to meaningful problems, and in the more general sense, large networks are promising; e.g. the GitHub-trained GPT:
https://www.youtube.com/watch?v=y5-wzgIySb4
or any of the GPT-3 programming demos:
https://twitter.com/sharifshameem/status/1284103765218299904 https://twitter.com/sharifshameem/status/1284815412949991425 https://www.reddit.com/r/commandline/comments/jl8jyr/the_nlc...
> These approaches are still state of the art for their respective tasks and there is no other approach that has been shown to do any better, including deep neural networks. In what sense are they "no longer more than briefly and tangentially relevant" as you say?
“if you're interested in learning AI”
These techniques were invented from the field of AI, but that does not mean they remain in the field of AI.
> You clearly have a strong opinion on GOFAI and the AI winter of the '80s, but what knowledge does this opinion come from? Can you say?
I can argue why ML approaches are good and promising and point at that. I can argue why ML approaches make conceptual sense whereas GOFAI does not, though I don't see us resolving that short-term so I'd rather not. But what I can't so easily do is point to the non-existence of GOFAI AI successes. It's just not there.
You do have tools like Watson and WolframAlpha, which use GOFAI techniques for fact search over large human-built knowledge repositories (trivia questions / math tools), but Watson is mostly considered a stunt, and I'm not aware of anyone calling WolframAlpha AI.
> the ability for reasoning (despite big claims to the contrary)
The nebulousness of the term ‘reasoning’ is pulling a lot of weight here. It's clearly doing sophisticated computations of some sort, beyond brute memorization.
> or arithmetic (ditto)
http://gptprompts.wikidot.com/logic:math#toc6
There are more examples too; this is just addressing the one point people get wrong most often. BPEs are an interim performance hack, not an indictment of the approach in general.
> or generation of novel programs
Is clearly false.
> For instance, the append() example you show above is clearly memorised: you haven't given the model any examples of append(), so it can't possibly learn its definition from examples.
This is true, but it's mostly just an artifact of me having to prompt it through FitnessAI. Unlike with smaller models, few-shot learning works; it just takes more space than I have to prompt with.
See the GitHub-trained example for something that integrates with more arbitrary code. There are many other examples, like the database prompt below (all bold is human input), or see some of the examples I linked above.
https://www.gwern.net/GPT-3#the-database-prompt
Or I can ask
Q: “If z(str) = str + " " + str + " z" (for example, z("dumbell") = "dumbell dumbell z"), and g(str) = "k " + str + " j" then what is g("run")?”
A: “g("run") = "k run j"”
(The inverse problem doesn't work so well, giving “g(str) = "k run j"” for one example (valid but vapid) and “g(str) = "k str j"” for two (close but no banana), and confusion for more complex prompts, though I suspect the format is partially to blame. I can list other failure cases. But my point isn't that GPT-3 is reliable here; it's a language model.)
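(For reference, a direct Python transcription of that prompt, confirming the forward answer:)

    # Ground truth for the z/g prompt above:
    def z(s): return s + " " + s + " z"
    def g(s): return "k " + s + " j"

    assert z("dumbell") == "dumbell dumbell z"
    assert g("run") == "k run j"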
> btw, why do you need to give it the list "a"? What happens if this is omitted from the prompt?
That example was from me trying to emulate an example I saw on Twitter I've since lost, which was a similar thing but multi-step, where each step GPT-3 returned all three lists, modified or queried per the given commands.
Omitting `a`, I get
Q: “b = ["lifting", "curls", "squats"], c = ["running", "jogging"], so what is b after b.append("pushups")?”
A: “lifting,curls,squats,pushups”
I had to change the prompt a bit because initially the result was truncated (FitnessAI is not made for this), or said “b.append("pushups") will add the string "pushups" to the end of b.”, which is correct but not what I wanted.
Few-shot would fix formatting inconsistencies; right now the model is just guessing.
>> These techniques were invented from the field of AI, but that does not mean they remain in the field of AI.
Like I say above, it is pretty uncontroversial that these approaches are part of the field of AI research. You can consult wikipedia or e.g. the calls for papers from major AI conferences, AAAI and IJCAI, if in doubt.
So I have to ask again, why do you say these approaches are not in the field of AI research? According to whom? And based on what?
I would please like an answer to the above question.
Further, I can certainly point you to successes of symbolic AI, which you say don't exist. For one thing, there are the entire fields of automated theorem proving, planning, search, game playing, knowledge representation and reasoning, etc. that you say are "not AI", but which are, like I say, still active and still state of the art in their respective tasks. These are certainly successful- they have produced systems and techniques that still work better than any alternative, and actually quite well.
For examples of specific systems that were successful in their time, see Logic Theorist [1], which proved 38 of the first 52 theorems in Principia Mathematica; Chinook [2], the first computer program to win a world championship against humans (in checkers/draughts); Deep Blue [3], the first computer system to defeat a reigning world chess champion (Garry Kasparov) in a match; MYCIN [4], one of the first AI systems to outperform human experts in disease diagnosis (specifically, the diagnosis of bacterial infections); and so on.
Of course these systems have been superseded - but they were successes nonetheless. Another reason to learn the history of AI is to become aware of those systems- they, indeed, were "there".
Again I have to ask you- where does your knowledge of AI come from? When you make such strong statements about what works and what doesn't, what failed and what succeeded, are you sure you are well informed? Do you draw your knowledge from primary sources, or are you trusting the opinions of others who claim to be experts- but may not be (like in the article above)?
>> How would you tackle this with ILP?
Below I've defined the problem in the format expected by Louise [5]:
    ?- list_mil_problem(ordered/3).
    Positive examples
    -----------------
    ordered([a],[b,c],[d,e,f]).

    Negative examples
    -----------------
    []

    Background knowledge
    --------------------
    shorter/2:
    shorter(A,B):-length(A,C),length(B,D),C<D.

Given this problem definition, Louise can learn the following (Prolog) program:

    ?- learn(ordered/3).
    ordered(A,B,C):-shorter(A,B),shorter(B,C).
    true.
To explain: shorter/2 is a predicate defined as background knowledge by me, triadic_chain is a metarule, a second-order clause that provides inductive bias, and length/2 is an ISO Prolog predicate. Like I say, this is a trivial problem, not least because its solution is easy to figure out and the background knowledge and metarules are trivial to define by hand. Louise can also perform predicate invention to define new background knowledge (kind of like inventing new features) and also new metarules. That is to say, Louise can learn the shorter/2 and length/2 programs, also from very few examples- and then reuse them as background knowledge. But showing how to do that would make for a larger example. I'm happy to oblige if you are curious.
I should point out that there exists no neural net approach that can learn the same (or a similar) program from a single positive example- not least because neural nets cannot make use of background knowledge (i.e. a library of programs from which to build other programs).
__________________
[1] https://en.wikipedia.org/wiki/Logic_Theorist
[2] https://en.wikipedia.org/wiki/Chinook_(computer_program)
[3] https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparo...
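[4] https://en.wikipedia.org/wiki/Mycin
[5] https://github.com/stassa/louise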