Mojo Request
I'm not trying to depress you; it's just that if you're still thinking of AI as "glorified Google search", then you're really not keeping up with this

I hope this is the case, and I hope I'm not coming off as an AI cheerleader here. I do appreciate the efficiency and drudgery reduction it has brought to my job, but on balance I'm very scared of the near-term implications of the speed at which this is improving, combined with companies' hunger to maximize profits. Why hire a team of coders when "AI can do it" for a fraction of the price - and unlike a year or two ago, AI actually can do it at least as well as that team of coders, if not better? "Well, they can train for other jobs" - which ones? The number of industries this is going to impact beyond computer programming in the next few years is going to be very large...

It really sucks and I (and other friends of mine who are more tech-adjacent in their jobs than I am) are literally losing sleep over it. I get the comparison to earlier technologies / apps / platforms, but this seems inherently different to me at a very foundational level. I would love to be proven wrong...

What we need is a very big wrench that we can throw into the gears of a very big data center. Or a very large guillotine.
 
It really sucks and I (and other friends of mine who are more tech-adjacent in their jobs than I am) are literally losing sleep over it. I get the comparison to earlier technologies / apps / platforms, but this seems inherently different to me at a very foundational level. I would love to be proven wrong...
I think it will be closer to industrialization than, say, iTunes.
 
Predictions are usually wrong or very wrong. Either this one is seriously oversold, or it's a force that you won't be able to avoid. I'm impressed by the technology, and if it improves at the current rate, it will be unavoidable very soon.
I'm glad I'm retired. If not, I would be thinking about a job shift to something implementing AI.

When email was brand new, I had a VP of R&D boss who said email was stupid and not going to last. He had his admin assistant print all his emails. He would handwrite a reply and then have it typed back into email. What an idiot. Also, who promotes a proud Luddite to the head of technology development?
 
I might be seeing the fallout from this current state of vibe coding firsthand in the next few weeks... I hope not. I also won't be surprised if the decision makers, far removed from the work, decide that paying someone offshore 1/2 to 1/3 of what I make to let Claude vibe code all day makes $ence.

Edit: I'm legitimately impressed with the improvements we've seen in the IDE-integrated LLMs over the past few months. We're still seeing that in most situations that deviate from public- and private-facing documentation (or rely on highly distributed documentation that is not obviously linked), an experienced dev is needed to get the project past the LLM's limits. But yeah, those limits are way different than they were just months ago.
 
Last edited:
We had an asshole at work whom I reprimanded for studying for his classes instead of actually doing the job he was paid for. Every night or late shift we had, he would wait for me to leave and then set up an office space, complete with laptop and binders full of materials, do his studies, and leave 75% of the work to his colleagues. He didn't even have the sense to hide this distraction from work when something like the Military Police came through the guardhouse.

People complained. I asked him how long he thought this would go on, and he got pissed off and said (sorta yelled) that he had another 2 semesters.

I put an end to it, and he proceeded to attempt at every turn to make my workplace hell. The truest narcissist I ever met in person.

What was he learning? Website development.
I really shouldn't, but I sort of chuckle every time I see an ad for those AI one-click website development apps. And no, it's not him doing the apps. We have folks that are still friends with him.
 
The advancements we've seen in AI over the last couple of years have been impressive. Considering this technology is still in its infancy, who knows where we'll be in another 5-10 years. One take is that it will put a lot of folks out of work, but another is that as new things are created, they will require workers to implement them. Time will tell, I suppose.

One of my primary concerns is how it will contribute to the dumbing-down of our youth. Being a geezer, I look back at my school research assignments as time spent in the library, poring over texts and extracting relevant material. A side effect of that process was gaining tangential information on various subjects that did not directly apply to the research topic but added to one's general knowledge. When prompted correctly, AI is very good at providing focused answers without a lot of clutter. But sometimes, the clutter is the best part.

I don't have school aged children anymore, but I see reports of kids just regurgitating whatever ChatGPT tells them when doing an assignment. They're not really learning anything other than how to prompt the agent. But then, maybe that's a skill in itself. :shrug:
 
The advancements we've seen in AI over the last couple of years have been impressive. Considering this technology is still in its infancy, who knows where we'll be in another 5-10 years. One take is that it will put a lot of folks out of work, but another is that as new things are created, they will require workers to implement them. Time will tell, I suppose.

One of my primary concerns is how it will contribute to the dumbing-down of our youth. Being a geezer, I look back at my school research assignments as time spent in the library, poring over texts and extracting relevant material. A side effect of that process was gaining tangential information on various subjects that did not directly apply to the research topic but added to one's general knowledge. When prompted correctly, AI is very good at providing focused answers without a lot of clutter. But sometimes, the clutter is the best part.

I don't have school aged children anymore, but I see reports of kids just regurgitating whatever ChatGPT tells them when doing an assignment. They're not really learning anything other than how to prompt the agent. But then, maybe that's a skill in itself. :shrug:
I hate to be the doom-and-gloomer here, but I think we're cooked. Just as in the job market, where AI is currently replacing entry-level white-collar positions, which means there's no path to middle and senior-level positions (and no replacement for institutional memory once the elder generation retires), so too in the education arena, where AI is replacing "entry-level tasks" such as reading, writing, and arithmetic. You might say, "oh, it's just like the calculator," but that's not true. The calculator didn't think for you. It crunched numbers, but you still had to know how to input the data.

Many students are arriving at my college in a state I would call "functionally illiterate" for college-level education. They (the "they" here meaning the students who arrive in such a state, not every student) cannot problem-solve. They can't sustain the attention span to read short texts (say, a ten-page James Baldwin essay or short story), and while of course they can technically read the words on the page, the reading comprehension is abysmal.

There will be some people who retain skills and know how to think beyond what already exists, but we will have a growing population that is dependent on the machines (and, more importantly, the techno-oligarchs who direct them). Worst-case scenario: the machines, rather logically, determine that the biggest drag on the planet is the human population and set about to remedy that. Best-case scenario: capitalism falls, and the machines discover that all the people on the planet can actually live moderately comfortable and resourced lives if only <0.3% of the population weren't controlling all the resources, and they set about to remedy that.

Now I'm really depressed heading into the weekend.
 
The advancements we've seen in AI over the last couple of years have been impressive. Considering this technology is still in its infancy, who knows where we'll be in another 5-10 years. One take is that it will put a lot of folks out of work, but another is that as new things are created, they will require workers to implement them. Time will tell, I suppose.

One of my primary concerns is how it will contribute to the dumbing-down of our youth. Being a geezer, I look back at my school research assignments as time spent in the library, poring over texts and extracting relevant material. A side effect of that process was gaining tangential information on various subjects that did not directly apply to the research topic but added to one's general knowledge. When prompted correctly, AI is very good at providing focused answers without a lot of clutter. But sometimes, the clutter is the best part.

I don't have school aged children anymore, but I see reports of kids just regurgitating whatever ChatGPT tells them when doing an assignment. They're not really learning anything other than how to prompt the agent. But then, maybe that's a skill in itself. :shrug:
You know, that's one of the things I should be more thankful for... my mom read to me since before I was born, and we did the same with our kids, and they are both really good readers. They still spend way too much time staring at screens, but I can't really talk, as I'm usually on YouTube or Minecraft in my spare time. Trying to get better about that.

I dunno about the "AI" thing. LLMs might be getting better, but they still can't think. They can't create. They can just mash up pre-existing content that was scraped (read: stolen) off the web for the most part. They might be getting better at not hallucinating, and obviously there's a use case for it, but like @Chicken Man said, we'll see what happens when the investment bubble bursts. Also, how long are people going to put up with the ridiculous power and water needs of these data centers? Out here in the West there are already problems with water usage.
 
I hate to be the doom-and-gloomer here, but I think we're cooked. Just as in the job market, where AI is currently replacing entry-level white-collar positions, which means there's no path to middle and senior-level positions (and no replacement for institutional memory once the elder generation retires), so too in the education arena, where AI is replacing "entry-level tasks" such as reading, writing, and arithmetic. You might say, "oh, it's just like the calculator," but that's not true. The calculator didn't think for you. It crunched numbers, but you still had to know how to input the data. Many students are arriving at my college in a state I would call "functionally illiterate" for college-level education. They (the "they" here meaning the students who arrive in such a state, not every student) cannot problem-solve. They can't sustain the attention span to read short texts (say, a ten-page James Baldwin essay or short story), and while of course they can technically read the words on the page, the reading comprehension is abysmal. There will be some people who retain skills and know how to think beyond what already exists, but we will have a growing population that is dependent on the machines (and, more importantly, the techno-oligarchs who direct them). Worst-case scenario: the machines, rather logically, determine that the biggest drag on the planet is the human population and set about to remedy that. Best-case scenario: capitalism falls, and the machines discover that all the people on the planet can actually live moderately comfortable and resourced lives if only <0.3% of the population weren't controlling all the resources, and they set about to remedy that.

Now I'm really depressed heading into the weekend.
I saw a story about this just recently but was unable to easily locate it to post here (maybe I should have asked AI, LOL).

Anyway, the gist is that in the "old days" the SAT reading-comprehension tests would present a ~1,000-word text of a few paragraphs, followed by several questions regarding the content of the text. The reader had to understand the material presented to answer correctly. The SAT has since been updated to present a short, ~50-word passage of a few sentences with a single question about it, repeated several times. So basically, they have dumbed down the test to get folks to score better. A sign of the times. :old:
 
One dangerous aspect of AI is that it learns from all existing information. The concept of Net Zero Carbon Emissions may seem innocent enough. What could possibly go wrong?

Life on planet Earth is carbon-based. Once AI becomes self-aware and decides to take Net Zero to its logical conclusion, the sterilization of all life begins. Zero carbon. Zero life. :eek:


IMG_3441.jpeg
 
One dangerous aspect of AI is that it learns from all existing information. The concept of Net Zero Carbon Emissions may seem innocent enough. What could possibly go wrong?

Life on planet Earth is carbon-based. Once AI becomes self-aware and decides to take Net Zero to its logical conclusion, the sterilization of all life begins. Zero carbon. Zero life. :eek:


View attachment 118528
Well, as long as we stop exhaling CO2, a greenhouse gas, we should be OK.
 
One dangerous aspect of AI is that it learns from all existing information. The concept of Net Zero Carbon Emissions may seem innocent enough. What could possibly go wrong?

Life on planet Earth is carbon-based. Once AI becomes self-aware and decides to take Net Zero to its logical conclusion, the sterilization of all life begins. Zero carbon. Zero life. :eek:


View attachment 118528

There are two problems with this. First, net zero is a human-derived value. Second, it absolutely doesn’t require the elimination of carbon-based life forms, just a balance of emissions and removal. Hence net zero, not zero-zero.

At most, an autonomous AI would just force itself to use less energy and/or redirect some resources to removal. But it’s unclear why it would want to, apart from valuing human survival.

The real danger is AI creating its own values, or people believing its values are superior, and then its abandoning something like long-term human survival. Which is already kind of the case with tech companies.
 
They’re saying 1/5 of Virginia’s power is being used by data centers now. They’re also saying that the current grid, power plants, and green energy will not continue to support it, that more green energy will not keep up, and that if more data centers are coming, then nuclear or coal plants will need to be built.

You watch: soon Gates and others heavily invested will say greenhouse gases are not as severe as once thought and that we should build more power plants to serve the data centers.
 
Last edited:
There are some who argue the machines already won... back at the peak of modern civilization... in 1999. Crap, that wasn't when they won, but you get the idea :bigg:

1773453794554.png
 
Last edited:
LLMs might be getting better, but they still can't think. They can't create. They can just mash up pre-existing content that was scraped (read: stolen) off the web for the most part. They might be getting better at not hallucinating,
It’s still really easy to poison the LLMs, and it doesn’t take much to ruin even a very large model.
 
They’re saying 1/5 of Virginia’s power is being used by data centers now. They’re also saying that the current grid, power plants, and green energy will not continue to support it, that more green energy will not keep up, and that if more data centers are coming, then nuclear or coal plants will need to be built.

You watch: soon Gates and others heavily invested will say greenhouse gases are not as severe as once thought and that we should build more power plants to serve the data centers.
They should be regulated to stay off the power grid and generate their own power with renewable energy technology.
 
They’re saying 1/5 of Virginia’s power is being used by data centers now. They’re also saying that the current grid, power plants, and green energy will not continue to support it, that more green energy will not keep up, and that if more data centers are coming, then nuclear or coal plants will need to be built.

You watch: soon Gates and others heavily invested will say greenhouse gases are not as severe as once thought and that we should build more power plants to serve the data centers.
The plan for Stargate (a cluster of data centers being built in Texas) is four nuclear reactors. They’re going to build one to get it figured out and then build the other three simultaneously. And once the builders have proven that they can bang out three reactors at one time, they’ll have no trouble finding big clients who want more. Assuming we don’t end up with a no-nukes president.

But even without public financing, building reactors takes years. I don’t think people in the USA want to go near coal, because potential future regulation could make it so expensive that they would have been better off building more reactors. And there will probably be small modular reactors going into production in the next few years, which will at least be able to power small data centers for people who don’t need to host a million computers in one huge building.

But Elon might be right about the future of data centers being in space. Starship could bring launch costs down so low that it will be worth it. At least until Kessler Syndrome becomes a problem.
 
There are two problems with this. First, net zero is a human-derived value. Second, it absolutely doesn’t require the elimination of carbon-based life forms, just a balance of emissions and removal. Hence net zero, not zero-zero.

At most, an autonomous AI would just force itself to use less energy and/or redirect some resources to removal. But it’s unclear why it would want to, apart from valuing human survival.

The real danger is AI creating its own values, or people believing its values are superior, and then its abandoning something like long-term human survival. Which is already kind of the case with tech companies.

A 3rd problem is that my post was tongue-in-cheek. It was a carbon-based joke that was not meant to be taken seriously.

They should be regulated to stay off the power grid and generate their own power with renewable energy technology.

“Renewable” energy cannot keep up with mankind’s consumption. The math doesn’t work. Of course, that doesn’t mean we should drop it. We need to be sensible about all energy sources and solutions. Every type of energy has downsides and non-monetary costs.

For example, clearing acres of land and displacing plants and wildlife for acres of solar panels is not a great concept. The costs to nature are high. But solar panels do belong on the rooftops of buildings.

Bill Gates has softened his positions on green energy, from hardline doomsday predictions and quick, extreme cuts in emissions to more gradual, long-term, innovative solutions. He has come to believe that the extreme positions have costs in terms of human welfare, particularly for developing countries.
 
The plan for Stargate (a cluster of data centers being built in Texas) is four nuclear reactors. They’re going to build one to get it figured out and then build the other three simultaneously. And once the builders have proven that they can bang out three reactors at one time, they’ll have no trouble finding big clients who want more. Assuming we don’t end up with a no-nukes president.

But even without public financing, building reactors takes years. I don’t think people in the USA want to go near coal, because potential future regulation could make it so expensive that they would have been better off building more reactors. And there will probably be small modular reactors going into production in the next few years, which will at least be able to power small data centers for people who don’t need to host a million computers in one huge building.

But Elon might be right about the future of data centers being in space. Starship could bring launch costs down so low that it will be worth it. At least until Kessler Syndrome becomes a problem.

"building reactors takes years."

This isn't in the politics forum, so I'll just say: I can imagine a scenario where that could be cut down to a month.
 