It is quite unpredictable which languages get into academia. It depends on getting through many committees, how long it takes teachers to learn a language, and whether there is any outside pressure to teach a particular one. There are also time pressures; there is a limit to how much you can teach anybody at once without overloading them.
Both Edsger Dijkstra and Donald Knuth invented their own "languages" because they wanted to illustrate their concepts without the distraction of real-world language idiosyncrasies. Alas, Knuth went for a pseudo-assembler (intended to run on a virtual machine), and no assembler makes a really good illustrator of high-level concepts. Dijkstra used a pseudo-Algol.
When I took languages originally, they taught the languages that real-world computer shops in town were using. Later, however, when I went back, the language of choice for introductory concepts was Modula-2, which as far as I know never had much commercial or scientific uptake, but was, like Dijkstra's pseudo-Algol, good for teaching general concepts.
Every school is going to do things differently, though, and things change rapidly. So only time will tell.
Some people, when well-known sources tell them that fire will burn them, don't put their hands in the fire.
Some people, being skeptical, will put their hands in the fire, get burned, and learn not to put their hands in the fire.
And some people, believing that they know better than well-known sources, will claim it's a lie, put their hands in the fire, and continue to scream it's a lie even as their hands burn down to charred stumps.
Almost no one I know in software development is using the language(s) they were taught in academia -- and I actually don't think most "real world" programming languages are good for teaching programming fundamentals. When I did my BSc in (Math and) Comp Sci, we did a little assembler to learn about the machine level (and we studied different machine architectures), we did a tiny bit of BASIC in the first year just to support Math coursework, and a lot of Pascal over the three years, so we could learn about pointers, memory (stack vs. heap), data structures and algorithms, problem decomposition, and so on.
The University of Washington has a course available online, CSE 341 "Programming Languages", which teaches the fundamentals of various styles of programming -- statically typed functional (Standard ML), dynamically typed functional (Racket), and dynamically typed object-oriented (Ruby) -- on the assumption that students will encounter plenty of the "fourth quadrant" (statically typed object-oriented) once they get out into the commercial world.
I learned about a dozen languages at university (in my spare time, for fun), and I've never used any of them commercially. Over my career, I've used about a dozen languages in production, and I've had to learn new languages to stay employable: my arc was loosely COBOL -> C -> C++ -> Java, with ColdFusion, Groovy, Scala, and Clojure since then, all on the JVM. I've tried to follow the advice in "The Pragmatic Programmer" to "learn a new programming language every year". It's a lofty goal, and I've typically only managed one language every two years, but over the last decade I've played with Elm, Go, Rust, and Kotlin (and was only disappointed with Go). I think learning new programming languages generally improves your skill in your "home" language, because you get to look at problems in a different way.
At this point, the programming language space is pretty crowded -- and only likely to get more so, I suspect -- so being a polyglot just makes sense from a career point of view.
I spent the morning putting in a comma and the afternoon removing it.
-- Gustave Flaubert, French realist novelist (1821-1880)