I don't doubt that a fair amount of boilerplate "churnalism" is computer generated, but so what? There still has to be some original material to feed into the generators, although there are certainly going to be feedback loops as well. All the hoo-hah about Google Translate (another AI success story) tends to ignore the key fact that it only works because humans generate the original content that it can use for its statistical analysis. No "intelligence" there, just a lot of number-crunching that is ultimately about encouraging you to look at Google sites, feed their machines with extra data, and consume the adverts they're paid to sell. This is true of a lot of the AI hype we see in the tech press, and our industry must surely be the most over-hyped in history.
More importantly, Sturgeon's Law applies, i.e. 90% of everything is crap. Personally, I don't much care whether online content is generated by humans or machines, because it's mostly crap anyway. People have been generating crap for centuries, and most "journalism" has long been a way of filling the gaps between ads, driven by what sells and what the proprietor wants. People already swallow this stuff without much thought as to whose interests are served by the lead stories. Which brings us to the question of who's in charge.
Machines may be generating "content", but they're not in control, because they are being operated by and for people and organisations who want your attention and your dollars. Manipulation of public opinion, the markets and pretty much anything else has always been a tool of the powerful, used to ensure that they retain and accrue as much wealth (= power to grab more wealth) as they can. Follow the money and you'll find the most important feedback loop in the process: it's all about the money, and it always has been.
So you're right, there are lots of new tools to automate a process of moving wealth from poor to rich that has always existed, but they are still ultimately serving the goals of their owners, because that's what they're for in the first place. So, for example, flash crashes in computerised algorithmic trading are a problem, but there are already moves afoot to limit these systems, because the people who benefit from them want to know that their wealth is not going to be wiped out in a flash. We've had high-tech nuclear weapons for 70 years, but the reason we haven't had nuclear Armageddon is that powerful people realise it's not in their interests, so they limit access to nukes and (generally) manage them more carefully. So I'm sure we'll see plenty more examples of feedback problems, weird market movements, swings in public opinion, malfunctioning autonomous systems (battle robots, anybody?) and so on. But they will still mainly be about serving the interests of the powerful, and the rich will act to secure those interests if these things threaten the status quo. In the end, I think real-world power will trump processor power every time.
And this is a First World Problem anyway: all this talk of the Singularity, while billions of people still don't have access to clean water or reliable energy. If Ray Kurzweil and his tech-boosting ilk want to solve a Hard Problem, they could start there.