Archive link: https://archive.ph/gsvf3
BEIJING, May 12 (Xinhua) – China will establish a tiered AI education system spanning primary, junior high, and senior high schools to guide students from foundational cognitive awareness to practical technological innovation, according to policy documents unveiled Monday.
At the primary school level, the Ministry of Education (MOE) prioritizes AI literacy through exposure to basic technologies, such as voice recognition and image classification. Building on this foundation, junior high school students will deepen their understanding of AI logic, examine machine learning processes, and develop critical thinking to identify misinformation in generative AI outputs.
Progressing to senior secondary education, the focus shifts toward applied innovation. Students will use accumulated AI knowledge to design and refine AI algorithm models, while cultivating interdisciplinary systems thinking.
To achieve these goals, the MOE will integrate AI-enabled teaching competencies into the teacher training framework. Additionally, it mandates that schools develop age-appropriate curricula with tiered instructional practices that align with cognitive development stages.
Notably, the MOE underscores generative AI’s pedagogical potential. “Teachers can empower generative AI tools to construct interactive teaching and create immersive learning experiences,” said an official overseeing basic education.
The official also called for strengthening students’ logical and innovative thinking through generative AI-powered interactive learning ecosystems.
Meanwhile, the MOE prohibits students from submitting AI-generated content as academic work or examination responses. Simultaneously, it requires teachers to cultivate learners’ capacity for critical thinking about AI outputs, thereby fostering authentic engagement in information processing.
On r/Sino: https://www.reddit.com/r/Sino/comments/1krrjki/china_will_establish_a_tiered_ai_education_system/
When this came out, westerners were crying about ‘muh AI’ and how this was a terrible decision – because only they, through actively refusing to understand AI, actually understand AI (don’t laugh!). This project got its start in the Beijing school board – the third biggest city in the country, 21 million people – before being nationalized, as is often the case in China with pilot projects. But apparently that board is immediately, irrevocably wrong, because westerners have decided that if they don’t like AI, then nobody should like AI either.
Westerners, including some ‘communists’, want schools to be places where kids only learn manual skills like how to file taxes, how to parallel park or how to cook a meal, and nothing intellectual whatsoever. Beijing is not saying that kids will go into an LLM career and nothing else; they are giving them a little more of a taste of what exists in the world, what it has to offer. Some of them will build prosthetics powered by AI, win a prize and discover a career in engineering.
But to us everything has to justify its own cost and profit-making ability, even schooling. Rail has to be self-sufficient in five years to be very tentatively approved, and the economic stimulus it provides is not considered at all in the equation: it has to cover its own cost of operation. If there is no immediate gain from it, then we don’t want it. And not only that, but we don’t want others to have it either.
This is one of the things that pisses me off so much about this society. It especially pisses me off when average people only think within those boundaries and then wonder how nothing ever happens and things start to crumble and stagnate around them.
If everyone actually followed this nonsense tomorrow then humanity would regress back to shit throwing monkeys in no time.
Personally I always rejected this shit because it never made any sense to me, and even now there are people who think less of me because of it.
You can show them the coolest thing they’ve ever seen and they’ll inevitably go “OK, but how are you going to make money with it?”. They’re completely incapable of seeing past that.
It really is like a fucking cult that calls anyone who gets shit done, or even just pursues knowledge without a profit incentive, a heretic.
Well said! Neoliberal brain worms have penetrated every aspect of western society, even into many leftist circles.
westerners were crying about ‘muh AI’ and how this was a terrible decision
It’s because for most people, their only exposure to AI is generative AI, the kind which, according to the MIT Technology Review, uses as much energy to generate a 5-second video as “running a microwave for over an hour” (about 1 kWh), and which has completely eroded some people’s critical thinking (the “@grok is this true?” people). And most people’s only knowledge of education on AI is “prompting”.
So people who don’t bother to read that the curriculum involves machine learning and misinformation identification are going to have the simplistic “AI bad” reaction.
There are technically different ways to train models, and they work differently, but in the end they’re all neural networks operating on layers. What I mean is that ‘genAI’ isn’t really a thing beyond a vague boogeyman, singled out as some unique ‘evil’ because detractors have to concede there are actual uses for AI while still wanting to retain their apprehension toward it. It doesn’t name the actual problem they have: either with big tech companies, or with the loss of their sense of superiority for not using AI. But if we have a problem with OpenAI, Anthropic, Amazon etc., then we should be able to name them and study them without lumping all of it into the ‘genAI’ label.
As an example, when you use a sentence-transformer to turn a sentence into an embedding (a vector in N dimensions, which captures the sentence’s semantic meaning as pure numbers), you’re using genAI… if genAI had an objective, measurable definition. The sentence transformer generates that vector from your input, based on how the model was trained.
Yet you can use sentence-transformers for a lot of stuff that is not necessarily ‘generative’. Making a search engine, for example, which I did for a hobby project. I wouldn’t say Google is ‘generative’ though.
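To make that concrete, here is a minimal sketch of embedding-based search. In a real pipeline the embeddings would come from a sentence-transformer model; the toy 4-dimensional vectors below are invented purely to illustrate the mechanics of ranking by cosine similarity:

```python
import math

# Toy "embeddings". A real sentence-transformer would produce these vectors
# (typically hundreds of dimensions); the numbers here are made up.
documents = {
    "how to train a neural network": [0.9, 0.1, 0.0, 0.2],
    "best noodle recipes":           [0.0, 0.8, 0.6, 0.1],
    "intro to machine learning":     [0.7, 0.3, 0.2, 0.1],
}

def cosine_similarity(a, b):
    """Angle-based similarity between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def search(query_vec, docs):
    """Return document titles ranked by similarity to the query embedding."""
    return sorted(docs, key=lambda d: cosine_similarity(query_vec, docs[d]),
                  reverse=True)

# A query embedding that sits "near" the machine-learning documents.
query = [0.85, 0.15, 0.05, 0.25]
print(search(query, documents)[0])  # → how to train a neural network
```

Ranking by cosine similarity like this is retrieval, not generation: nothing new is synthesized, even though the embeddings come from the same model family people file under ‘genAI’.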
So what is genAI? It’s whatever one doesn’t like. That way they can distance themselves from ‘genAI’ while conceding the actual use cases of AI, because there are indeed objectively beneficial uses for it, and they can’t persist in denying that reality forever, lest they look like fools (like when twitter didn’t understand how image generation worked early on and tried to claim that it was pasting together pieces from thousands of different pictures. They moved on from that very quickly when they learned about noise diffusion.)
I know I’m a bit all over the place because I haven’t synthesized this on paper yet, but basically I don’t like the distinction because it creates a divide between socially acceptable AI use and socially unacceptable AI use. But the difference doesn’t exist; bullying people into compliance is idealist and will not lead to lasting change; material conditions will.
This leads us to being able to talk about electricity and water consumption. I don’t doubt MIT’s findings, though I will say estimates are always only estimates, and calculating actual, final energy use is difficult even when you have all the data available.
However, like I often say, if we united all the countries of the world together, we could have the largest GDP in the universe. What I mean is that we must not miss the forest for the trees. One hour of running a microwave seems like a lot because we usually don’t run the microwave for more than 3 minutes at home, but you know who runs microwaves all day long without a care in the world? The fossil fuel industry. Golf courses. The meat industry. A single grocery store throwing away hundreds of kilograms of food because it’s perishable has done more environmental harm than my microwave ever could in its lifespan of heating up my food.
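For a sense of scale, here is a rough back-of-envelope on that microwave comparison, assuming a typical 1,000 W household microwave and the roughly 1 kWh-per-video estimate cited from the MIT Technology Review (the wattage is an assumption, not a figure from the article):

```python
# Back-of-envelope: how many typical home microwave sessions equal
# one ~1 kWh 5-second generated video?
MICROWAVE_KW = 1.0    # assumed power draw of a household microwave, in kW
VIDEO_KWH = 1.0       # cited estimate: energy per 5-second generated video
HOME_USE_MIN = 3      # a typical reheating session, in minutes

home_use_kwh = MICROWAVE_KW * (HOME_USE_MIN / 60)   # 0.05 kWh per session
sessions_per_video = VIDEO_KWH / home_use_kwh       # 20 sessions

print(f"One generated video ≈ {sessions_per_video:.0f} microwave sessions "
      f"of {HOME_USE_MIN} minutes each")
```

Twenty reheated meals per video sounds dramatic at the individual level, which is exactly the framing the paragraph above argues against: industrial consumers dwarf both numbers.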
Even gaming takes more power than running a local neural network, whether an LLM or an image diffusion model. Youtube is hosted in datacenters too, and some years back it was all the rage on Linkedin to try and shame proles for watching too much youtube because “watching one hour of youtube consumes as much power as leaving the lights on when you don’t use them! So think about that when you leave it on as background noise!”
We have to move away from individual citizen responsibility (i.e. instilling moral failure into people for not living up to some standard we impose on ourselves and each other) and towards systematic structural change. There is no ethical consumption under capitalism; people are allowed to watch Netflix and drive cars, and they will do it regardless of how many managers on Linkedin disapprove. That’s nothing compared to a billionaire flying a private jet for a 15 minute trip or the meat industry making a beef patty.
That’s not to say there aren’t issues with the way AI is treated in the West. The US, in its usual way, has given AI companies carte blanche to do whatever they want regardless of law. This is why datacenters pollute; people like Elon Musk buy gas turbines to power their datacenters because the US grid could not power them even if it wanted to. They normally need EPA approval for gas turbines, but they just don’t care because they can absorb the fine, and they figure they won’t even be hit with one. And so far that’s been true. Musk’s datacenter in Memphis has 12 turbines when deploying even 1 is already a huge deal. But that’s the US; it’s not new and it’s not the only way of doing things, it’s just theirs. China, meanwhile, is installing solar capacity equivalent to the entire US grid roughly every 18 months, so it’s very likely that a substantial portion of Deepseek or GLM (z.ai) is powered by solar (I tried to look for more information once before, but it doesn’t really seem to exist). But if we limited ourselves to saying “oh, it’s just how genAI is, genAI is bad for the environment”, we would miss all of that and never study the problem deeper.
Overall though, we agree: education on anything to do with AI is lacking, and it’s going to be important to teach people (both in school and outside it) about AI. I wanted to add this comment to answer @burlemarx@lemmygrad.ml’s comment as well.
Just to note, I am not part of the Luddite group that thinks using AI is immoral. The problem with AI (and genAI more specifically) is basically the appropriation of the labor of millions of people (both the datasets created by humans and the people working on the supervised learning portions) in order to create a product, coupled with misleading marketing campaigns that use tech-apocalypse language to inflate a financial bubble. So the punchline is: I don’t think using AI to create art or code is immoral. The problem lies in the production, reproduction and accumulation of capital.
That said, I do think any curriculum that addresses AI needs to cover all the types of AI and explain how AI actually works (as some researchers like Miguel Nicolelis like to say, it is neither artificial nor intelligent), as opposed to how people promote AI like it’s some kind of magic wand to solve all problems. I think that’s better than the current mainstream market approach, which is replacing AI knowledge with prompt engineering.
Yeah, that’s the issue, I think. AI became synonymous with generative AI. Pattern recognition, computer vision, OCR, machine learning, optimization algorithms, data mining, clustering, classification, regression… everything got swept under the rug when genAI came along.
Imagine wanting an educated populace
BEIJING, May 12 (Xinhua) – China will establish a tiered AI education system spanning primary, junior high, and senior high schools to guide students from foundational cognitive awareness to practical technological innovation, according to policy documents unveiled Monday.
At the primary school level, the Ministry of Education (MOE) prioritizes AI literacy through exposure to basic technologies, such as voice recognition and image classification. Building on this foundation, junior high school students will deepen their understanding of AI logic, examine machine learning processes, and develop critical thinking to identify misinformation in generative AI outputs.
Phew… it’s a nuanced approach to AI.
I wish the school I went to as a kid had classes like that! Neural Nets are fascinating to learn about.
Meanwhile here in the US people shrivel up into a screeching ball for just mentioning AI and call everything they don’t like “slop”