

Jensen Huang's outrageous claims: AGI has been achieved, Ilya is wrong, and there will be 1 billion programmers.

Latest update: 2026-03-24
Jay, reporting from Aofei Temple | QbitAI WeChat Official Account

"We have already achieved AGI."

Jensen Huang's latest outrageous remark shocked everyone.

But that's not all. In a recent interview with Lex Fridman, Huang revealed even more…

This wasn't just empty talk; Huang explored the issue from technical, social, and humanistic perspectives, making it clear that he has been giving it considerable thought recently.

The outrageous statements came one after another.

Ilya is wrong; pre-training is far from reaching its peak, and synthetic data will further drive the expansion of data scale.

He then went a step further, pushing back on peers who underestimate test-time scaling.

Reasoning is by no means a lightweight form of computation. If pre-training is reading, then reasoning is thinking, and thinking is far more difficult than reading.

The most interesting part of the entire interview was undoubtedly Huang's vision for the future, built on the first principle that "AGI has been achieved."

  • OpenClaw is the iPhone of the token era. The one you chat with most often might be your lobster.

  • Intelligence will become a commodity that can be priced on demand and circulated on a large scale in the form of tokens.

  • Programmers will not be replaced by AI. Writing code is not actually a core skill, and this group will surge from 30 million to 1 billion.

  • To achieve space travel, I might create a humanoid robot and then upload my consciousness into it in the form of a model.

  • The word "intelligence" has been overly mythologized. Humanity, character, compassion, and generosity are the most valuable qualities.

This was Huang's first appearance on Lex Fridman's show, and Lex seized the opportunity to probe him from every angle.

The two-hour conversation covered a wide range of topics, from Nvidia's grand vision for the future data center to Huang's unique management philosophy and way of life... and even briefly touched on philosophy at the end.

Huang once again showed his emotional side.

The best outcome is to die at work. Ideally, it should happen instantly, without prolonged suffering.

The full interview transcript is attached below. For readability, QbitAI has made some adjustments to the content without altering the original meaning.

Lex Fridman in conversation with Jensen Huang

Have scaling laws hit a wall?

Lex Fridman: Do people still believe in scaling laws?

Jensen Huang: I do, and the scaling laws are even more evident now.

Lex Fridman: Among pre-training, post-training, test-time, and agentic scaling, which bottleneck worries you most, even keeps you up at night?

Jensen Huang: In the pre-training scaling-law stage, everyone's judgment was actually correct: the total amount of high-quality data limits the upper bound of AI intelligence. The more data and the larger the model, the smarter the AI will be.

Later, Ilya Sutskever said the data had been used up and pre-training had hit its limit, causing great panic; the industry felt AI was coming to an end. Obviously, that is not the case.

The data will continue to grow, and a large portion of it will be synthetic data.

Most of the data we use to exchange and transmit knowledge is inherently "synthetic." It isn't extracted directly from nature; humans create it. I digest it, process it, and regenerate it, and then others digest it in turn.

AI can now generate data on a large scale based on real-world data. During the post-training phase, the scale of data continues to expand, but the proportion of human-generated data will decrease. The bottleneck in training has shifted from data to computing power.

Then comes the test time.

I remember someone telling me that inference itself is simple; the difficulty lies in pre-training. Inference chips would be small chips, not requiring complex and expensive systems like NVIDIA's. Inference would be a large market, but it would eventually be commoditized, and anyone could do it.

This logic has never held water in my opinion, because reasoning is thinking, and thinking is hard, far harder than reading.

Pre-training is more like memorization and generalization; it's reading and rereading, finding patterns in relationships. But thinking is problem-solving, using first principles to try different paths. The essence of test-time scaling is reasoning, planning, and searching.

How could such a calculation be lightweight?

Further down the line, after inference and test-time scaling, we have created agentic instances.

Each has a large language model, but at test time it will conduct research, query databases, and call tools. More importantly, it will continuously spawn sub-agents.

This is the next scaling law: agentic scaling. Essentially, it's AI multiplication; you can spawn any number of agents.

These agents generate a large amount of data and experience during operation. The high-quality parts are retained and fed back into pre-training for memorization and generalization. After post-training and test-time enhancement, the results are then delivered to industry by the agentic system.

Lex Fridman: Different components require different hardware to achieve optimal performance, such as mixture-of-experts and sparsity. You must anticipate the direction of AI development, but hardware cannot be changed in a week.

Jensen Huang: AI model architecture changes roughly every six months, while system architecture and hardware architecture have a cycle of about three years. So you have to predict the direction two or three years from now.

First, we conduct our own research, including both basic and applied research. We train our own models, gaining firsthand experience, which is also part of the collaborative design process.

Secondly, we have in-depth collaborations with almost all AI companies. We can understand the problems they encounter.

Another point is that the architecture needs to be flexible enough to adapt to change. That is where CUDA's value lies: extreme acceleration on one hand, and a high degree of flexibility on the other.

Finding the right balance between specialization and versatility is crucial. Too much specialization makes it unable to adapt to algorithm changes; too much versatility means losing its speedup advantage.

If you compare the Grace Blackwell rack with the Vera Rubin rack a year later, you'll find a huge difference.

Grace Blackwell's design goal was singular: to run LLMs. The Vera Rubin rack, on the other hand, incorporates storage accelerators, the new Vera CPU, NVLink 72 for running LLMs, and a new expansion rack, Rock.

This system is completely different from the previous generation, with many more components. The previous generation was designed for MoE large model inference, while this generation is designed for agents, which need to call tools.

Lex Fridman: The design of this system actually predates products like Claude Code and Codex. Where does this judgment come from?

Jensen Huang: No matter how technology develops, if you consider a large language model as a digital worker, what does it need?

It needs access to the actual data, which is the file system. It needs to do research because it can't possibly know everything. It needs to use tools.

Some people say that AI will make software disappear, but this is completely unfounded.

Ten years from now, the most powerful agent, even a humanoid robot, will come to your home. Will it be more likely to directly use your existing tools, or will it turn its hands into a hammer, a scalpel, or even use its fingers to emit microwaves to boil water?

It's clearly the former. It will use your microwave. Don't worry if it doesn't know how the first time; it can connect to the internet, read the manual, and get the hang of it quickly. What I just described is actually the core capability of OpenClaw.

OpenClaw is to agentic systems what ChatGPT is to generative AI.

Lex Fridman: You just talked a lot about what was considered a bottleneck in the past but was later overcome. What will be the next bottleneck?

Jensen Huang: Electricity is a problem.

Over the past decade, Moore's Law has brought about a roughly 100-fold increase in computing power, while we have achieved a million-fold increase through scaling.

Next, we will continue to rely on the ultimate collaborative design to perpetuate this trend.

Energy efficiency directly determines a company's revenue and a factory's output capacity. We will push energy efficiency to its limit and reduce token costs as quickly as possible.

Although our hardware prices are rising, token-generation efficiency is improving even faster, so token costs are continuously decreasing, basically by an order of magnitude every year.
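The order-of-magnitude-per-year claim compounds quickly. A minimal sketch of that arithmetic, with an invented $10-per-million starting price (the interview gives no baseline figure):

```python
# Illustrative compounding of a 10x-per-year decline in cost per token.
# The $10-per-million starting price is a placeholder, not a figure
# from the interview.

def token_cost(start_per_million: float, years: int,
               annual_factor: float = 10.0) -> float:
    """Cost per million tokens after `years` of an `annual_factor`x yearly drop."""
    return start_per_million / (annual_factor ** years)

for year in range(4):
    print(f"year {year}: ${token_cost(10.0, year):g} per million tokens")
```

Three years of such a decline takes the hypothetical price from $10 to one cent per million tokens, which is why the direction of the curve matters more than any single data point.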

Lex Fridman: Do supply chain bottlenecks keep you up at night? For example, ASML's EUV lithography machines, TSMC's CoWoS packaging, and SK Hynix's high-bandwidth memory?

Jensen Huang: Historically, almost no company of our size has been able to grow at this rate, and it continues to accelerate. Therefore, the entire upstream and downstream supply chain is extremely crucial to us.

I spent a lot of time communicating with the CEOs of our partners about one thing: what exactly is driving this growth, and why is it still accelerating?

I will tell them about the current business situation, recent growth drivers, what's happening, and where we're headed next. They will then use this information to determine their investment direction.

Of course, I will also visit them in person to explain what will happen this quarter, next year, and the year after.

Lex Fridman: But interestingly, you don't seem to be losing sleep over the supply chain.

Jensen Huang: Because I've been doing everything I was supposed to do. I've analyzed each of these issues one by one.

From the earliest DGX-1 to today's rack-mount computing with NVLink-72, the system architecture has completely changed. I will analyze what this means for software, for engineering, for design, testing, and the supply chain.

Data Centers and Energy

Lex Fridman: How should the energy problem be solved?

Jensen Huang: The current power grid is designed for the most extreme conditions, with redundancy in reserve.

But the reality is that 99% of the time, we fall far short of that peak. Truly extreme situations only occur on a very few days of the year, such as extreme weather events in winter or summer.

Most of the time, our electricity consumption is only about 60% of the peak level.

In other words, 99% of the time, the power grid actually has a large amount of idle electricity.

So I was thinking, is it possible to enable data centers to proactively relinquish some power when the power grid needs to be at full capacity, through better understanding, contract design, and computer architecture design?

During those periods, we can use a backup generator, move the workload elsewhere, or even slow the computers down. For example, we can slightly reduce performance and power consumption, making response times a little longer.

We should rethink how data centers are designed.

The current goal is 100% online operation, and contract requirements are very stringent, which puts a lot of pressure on the power grid.

But in fact, the power grid does not need to be expanded to a higher peak capacity; we only need to make use of the idle electricity.
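Huang's headroom argument is simple arithmetic. A sketch with placeholder numbers: the 60% typical-load fraction echoes the interview, but the 100 GW grid size is invented purely for illustration.

```python
# Rough arithmetic behind the "idle electricity" point above.
# peak_capacity_gw is an invented example figure; the 0.60 load fraction
# echoes the interview's "about 60% of peak" remark.

peak_capacity_gw = 100.0       # grid sized for the worst-case peak
typical_load_fraction = 0.60   # typical demand as a share of peak

typical_load_gw = peak_capacity_gw * typical_load_fraction
idle_headroom_gw = peak_capacity_gw - typical_load_gw

print(f"typical load: {typical_load_gw:.0f} GW")
print(f"headroom available most of the time: {idle_headroom_gw:.0f} GW")
```

Under these assumptions, 40% of the grid's built capacity sits idle most of the year, which is the slack Huang proposes data centers should absorb.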

Lex Fridman: So what's the obstacle? Is it regulation, or is it a procedural issue?

Jensen Huang: This is a three-way issue.

First, there are the end customers. Their requirement for data centers is that they must never be unavailable; in other words, absolutely perfect. To achieve that perfection, you need backup generators and near-perfect grid stability.

So the first step is to make customers, especially CEOs, aware of what they are asking for.

Often, there's a disconnect between the person signing the contract and the CEO. The CEO might have no idea what's written in the contract. But during negotiations, both sides strive for the best possible terms. As a result, cloud service providers are forced to demand the same level of protection from power companies.

The second point is the design of the data center itself.

What we need is a system that can gracefully degrade. When the power grid tells us that it can only provide 80% of the power, we can migrate the workload, ensuring no data loss, while reducing computing speed and energy consumption.

Service quality may decrease slightly, but mission-critical tasks can be immediately migrated to other data centers to ensure no impact.

The third point is about the power companies. Currently, the power companies say that expanding the grid will take five years. However, if they could offer different tiers of power-supply commitments, they could actually deliver power much faster.
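The graceful-degradation idea in the second point can be sketched as a simple decision rule. Everything here is illustrative: the thresholds, the function name, and the action strings are invented for the sketch and are not NVIDIA's actual control logic.

```python
# A toy policy for reacting to a grid-imposed power cap, following the
# "gracefully degrade" idea above. Thresholds and actions are invented.

def power_policy(available_fraction: float) -> str:
    """Map the fraction of normal power the grid can supply to an action."""
    if available_fraction >= 1.0:
        return "run at full power"
    if available_fraction >= 0.8:
        # modest shortfall: lower clocks, accept longer response times
        return "throttle non-critical workloads"
    # severe shortfall: move mission-critical jobs to another site first
    return "migrate critical workloads, then throttle the rest"

for frac in (1.0, 0.85, 0.5):
    print(frac, "->", power_policy(frac))
```

The point of the sketch is the ordering: no work is ever lost, only slowed or relocated, which is what lets the grid contract be relaxed in the first place.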

Lex Fridman: You previously highly praised the speed at which Musk and xAI built the Colossus supercomputer in Memphis. It was built in just four months and now has 200,000 GPUs, and it is still growing rapidly.

Are there any lessons to be learned from his approach that could be offered to the entire data center industry?

Jensen Huang: Elon has a deep understanding of many fields. He can switch back and forth between multiple disciplines at the same time, and he will constantly question everything.

Is this really necessary? Does it have to be done this way? Does it really need to take this long?

He keeps asking questions, compressing everything down to the bare minimum, deleting whatever can be deleted while keeping the functionality complete.

His style can be described as extremely minimalist, and he achieves this at the system level.

Another thing I really admire about him is that he's always on the front lines. Wherever there's a problem, he goes straight there and says, "Show me the problem."

Finally, his sense of urgency is personally transmitted. When he himself acts with an extremely high sense of urgency, the entire system is mobilized.

Every supplier has many clients and many projects, but he will make himself the highest priority project for everyone.

Lex Fridman: I remember once when I was with him, he was even studying how cables were plugged into the racks. He went directly with the field engineers to see where the process was prone to errors.

Is there anything in common between this approach and NVIDIA's ultimate collaborative design?

Jensen Huang: Collaborative design is essentially an extreme systems engineering problem.

All our work is based on this first principle.

In addition, we have another concept that I started using 30 years ago, called "light speed".

The speed of light is not just about speed; it's a concept I use to express the limits of physics. I will insist that everyone start from first principles, understand the limits of physics, and then begin designing.

I don't really like so-called incremental optimization.

Optimizing gradually is certainly fine. But I don't like it when someone says from the beginning, "It currently takes 74 days, but we can help you optimize it to 72 days."

First, tell me: why does it take 74 days?

If we were to redesign it from scratch today, how long would it theoretically take?

Often, you'll find that the answer might be 6 days.

The remaining 68 days may be filled with various historical burdens, cost trade-offs, and procedural complexities.

Lex Fridman: When you deal with such complex systems, is "simplicity" an important principle?

A single NVL72 rack contains 1.3 million components and 1,300 chips. You also need to produce about 200 of these pods per week.

At this scale, simplicity is virtually impossible.

Jensen Huang: My most frequent saying is: The complexity should be just enough, but it must be as simple as possible.

The key question is, is this complexity necessary?

If it's unnecessary, then it's just redundant complexity.

Lex Fridman: Over the past decade, China's rise in the technology sector has been astonishing, giving rise to a large number of world-class companies and engineering teams. Why has China been able to achieve this?

Jensen Huang: About half of the world’s AI researchers are Chinese, and most of them are still in China.

China's technology industry emerged at a very crucial juncture, namely the era of mobile internet and cloud computing.

The core of that era was software, and that's precisely where China's strength lies.

They have a large number of young people with very solid scientific and mathematical foundations, a strong education system, and this generation grew up in the software age and is very familiar with modern software.

In addition, China is not a single economy, but is composed of many provinces and cities, with fierce competition between them.

This is why you see so many new energy vehicle companies, AI companies, and almost every industry you can think of, with numerous companies operating simultaneously. And ultimately, those that remain are often the very strong ones.

There is also a cultural factor.

Their ranking is roughly: family first, friends second, company third.

This has led to very frequent information exchange between people. To some extent, they have always been in a "quasi-open-source" state.

You'll find that the relationships between engineers are intertwined; friends work at other companies, relatives work at other companies, and many are even classmates.

For them, the concept of "classmate" represents a lifelong relationship.

In this context, knowledge spreads extremely quickly. Since true secrecy is difficult to maintain, the solution is to open-source it. The open-source community, in turn, further amplifies the speed of innovation.

Lex Fridman: And from a cultural perspective, being an engineer in China is a very cool thing.

Jensen Huang: That's right, this is a builder nation.

Our country's leaders are very capable, but many of them are lawyers by training, because we place more emphasis on rules and systems.

China's leaders rose from poverty, and many of them are engineers, and very good ones at that.

Lex Fridman: TSMC is also a legendary company. What do you think has made it this successful?

Jensen Huang: Many people have a misunderstanding about TSMC. They think that TSMC's core is just technology, such as transistors, packaging, and photonics.

More importantly, they can coordinate the ever-changing needs of hundreds of companies worldwide. Customers are constantly changing; some are expanding production, some are reducing production, some are placing urgent orders, and some are canceling orders.

In this highly dynamic environment, they are still able to maintain high production capacity, high yield, and low cost, while providing excellent customer service.

They take their commitments very seriously. They always deliver on schedule, which is crucial for their customers.

Secondly, there's their culture. On one hand, they're extremely technology-driven, constantly pushing the boundaries of technology. On the other hand, they place immense importance on customer service. Many companies can only achieve one of these, but they've achieved world-class levels in both.

Third, there is an intangible ability called trust.

Lex Fridman: This trust comes from both long-term performance and interpersonal relationships.

Jensen Huang: We have been working together for 30 years and have done business worth billions or even tens of billions of dollars, but we have no contract.

Lex Fridman: There's another story that the founder of TSMC invited you to be the CEO in 2013, but you declined. Is that true?

Jensen Huang: It's true. I didn't take the opportunity lightly; I'm very honored. TSMC is one of the most important companies in history, and Morris Chang is someone I greatly respect, and a friend.

But I was also very clear at the time that what Nvidia was doing was equally important. I could already see in my mind what it would become and the impact it might have. It was my responsibility, and I had to make it happen. So I declined.

It's not because the opportunity wasn't good enough; it's because I couldn't leave.

Lex Fridman: How will CUDA's installed base evolve into a moat in the AI era?

Jensen Huang: In the past, for us, the computing unit was the GPU. Later, it became a single computer. Then it became a cluster. Now, it has become an entire AI factory.

Previously, when I thought about Nvidia's products, I pictured chips. Today, holding up a chip still has its charm, but it's no longer the central image in my mind.

The image that comes to mind right now is a massive, gigawatt-scale infrastructure. It's connected to the power generation system, the power grid, and has a huge cooling system and a giant network. There are tens of thousands of people installing it, hundreds of network engineers on-site, and tens of thousands more working behind the scenes to bring it online.

Starting up a factory like this isn't something one person does by pressing a switch and saying, "It's powered on now." We need thousands of people working together to light it up.

Lex Fridman: So your understanding of a single computational unit has changed.

Jensen Huang: Yes, I'm thinking about the entire infrastructure. And I hope the next leap takes it to a planetary-scale system.

Lex Fridman: What do you think of the direction Elon mentioned, moving computing into space to alleviate the energy expansion problem?

Jensen Huang: Actually, space is particularly suitable for many imaging missions because the high-resolution imaging systems on satellites continuously scan the Earth.

But if you want centimeter-level resolution and continuous global coverage, you will essentially get near real-time telemetry data.

That data volume is far too large to transmit all the way back to Earth. You must perform AI processing directly at the edge, that is, on the satellite. Discard anything unchanged, previously seen, or valueless, and keep only the truly necessary parts.

But there's no conduction or convection in space; heat dissipation is basically limited to radiation. We'd probably have to put a very large heat sink there.

Lex F rid man: Is this something that will happen in five years, ten years, or twenty years?

Jensen Huang: I still lean toward pragmatism.

But at the same time, I will continue to cultivate the space program. So I will send engineers to study this issue, and a lot of engineering exploration can be done in the early stages.

But before that, there is already so much idle electricity on Earth, and I want to make use of it as soon as possible.

Lex Fridman: Do you think Nvidia could reach a market capitalization of $10 trillion in the future?

Jensen Huang: We are the largest computer company in history. This in itself is worth asking, why?

The first reason is that computers have transformed from retrieval systems into generation systems.

Past computing was essentially document retrieval. Almost everything was a file. We would first write the content and store it in a file. Then, a recommendation system would retrieve the content for you.

In the old world, computation involved humans recording data first, followed by the system retrieving it. Now, AI computers are context-aware. They process and generate tokens in real time.

Therefore, we have moved from retrieval-based computing to generative computing. This new world requires far more computing power than the old world. The old world required massive storage, while the new world requires massive computation.

The second reason is that the role of computers has changed.

In the past, it was more like a warehouse. Now we're building a factory. Warehouses don't generate much direct revenue, while factories directly contribute to the company's income.

We are now beginning to see that the products manufactured in this factory are actually being consumed by people, and are of high value.

These items are tokens. Tokens are starting to tier up, just like the iPhone. There are free tokens, premium tokens, and mid-tier tokens.

Ultimately, intelligence will become a product with tiered pricing. High-intelligence tokens, used in more specialized scenarios, will command higher prices. A thousand dollars per million tokens, in my view, is not far off.

The next questions are: How many of these factories does the world need? How many tokens does the world need? How much is society willing to pay for these tokens? If productivity increases significantly because of them, what will the economy look like?

Putting all of this together, I'm almost certain that global GDP growth will accelerate further, and that the share of the economy devoted to computation will be 100 times what it was before.

I remember when Nvidia first broke the $1 billion revenue mark, a CEO told me that fabless semiconductor companies could theoretically not exceed $1 billion.

Later, some people said that we would never exceed $25 billion because a certain company would restrict us. I've heard similar things many times. These judgments are not based on first principles.

Nvidia has never survived by grabbing market share. Many of the markets I just mentioned didn't even exist before. We're not taking over an existing market; we're creating new markets.

It's hard for people to imagine how big we'll eventually be. Because there's no ready-made object that allows me to say how much share I'm taking from whom.

Lex Fridman: That's an interesting perspective. In a sense, it's a token factory.

Jensen Huang: And what really excites me is that the iPhone moment has arrived.

Lex Fridman: Are you saying OpenClaw is the iPhone of tokens?

Jensen Huang: More broadly speaking, it's about the agent as a whole. OpenClaw is the fastest-growing application in history, almost a vertical leap. Without a doubt, OpenClaw is the iPhone of tokens.

Lex Fridman: Starting last December, something really special seemed to happen. Everyone suddenly realized the power of Claude Code, Codex, and OpenClaw.

I'm even a little embarrassed to admit that on my way here today, at the airport, I did something like this for the first time in public: I was talking to my computer and coding at the same time.

I don't know how to view such a future, where everyone is talking to AI on the street.

Jensen Huang: And more likely, your AI will keep bothering you. Because it works so fast. It will keep coming back to report, "I'm done, what do you want me to do next?"

In the future, the thing that will chat with you and send you messages the most often might be your lobster.

Jensen Huang's Management Philosophy

Lex Fridman: Nvidia is now involved in completely different disciplines, each with world-class experts. How did you bring these people together?

Jensen Huang: Designing a computer requires an operating system; designing a company is essentially the same. You need to figure out what the company will ultimately produce.

I've seen many company organizational charts—hamburger-shaped, flat, car company-shaped—they all look pretty much the same.

But a company should be a machine, a system, and its structure must reflect the environment in which it operates.

Currently, about 60 people report directly to me. One-on-one meetings with each of them simply aren't practical, so I prefer to gather all 60 together, raise the issues, and work through them as a group.

Temporary lapses in concentration are acceptable, but they know when they must focus. If someone could offer an opinion but doesn't, I will call them out directly.

Lex Fridman: How do you make groundbreaking judgments when faced with critical moments requiring choices?

Jensen Huang: It's mainly driven by curiosity. At a certain point, a whole set of reasoning becomes very clear, making me believe that this will definitely happen.

Once you've determined this, you start building a future. Then you work out the path, reasoning why it must exist. The management team is involved, and we spend a significant amount of time on this process.

Many leaders keep these ideas to themselves, waiting until one day to suddenly announce: a new plan, a new organization, a new mission. I never do that.

When an idea begins to influence my judgment, I immediately let those around me know. I continuously share new information, new insights, and new engineering progress, using these to shape everyone's understanding.

Many times, I already have the answer in my heart, but I will continue to convey this logic little by little through external events and internal developments.

I do this every day, with the board, management team, and employees. So when I announced the acquisition of Mellanox, everyone thought it was a natural progression.

When I decided to go all in on deep learning, all departments had already laid the groundwork in this direction, and most of the logic had been accepted.

I love that feeling when I announce something and my employees think: why are you only telling me now?!

This is the goal of leadership: to get everyone moving in the same direction. Otherwise, when you announce a major decision, people will only feel confused.

If you look back at each GTC keynote, you'll see that it shapes the perception of the entire industry, and in turn strengthens the perception within the company.

Lex Fridman: You attribute a lot of your success to one thing: that you can endure more hardship and pain than others.

As the CEO of Nvidia, the entire economy and many countries will make strategic judgments, allocate funds, and plan AI infrastructure around you.

How do you deal with these pressures?

Jensen Huang: I will keep reasoning about what we are doing, what impact this will have, whether it is a help or a burden to others, such as whether it will put a lot of pressure on the supply chain.

The next question is, what are you going to do?

When faced with almost any emotion, I first break it down. What is my current situation? What changes have occurred? What are the difficulties? What do I need to do next?

Then only one question remains: did you do it or not?

If you've already determined that something needs to be done, but you haven't done it yourself or asked someone else to do it, then stop crying over it.

I was able to fall asleep because I had already made a list of things to do. I had already told others about anything I felt might harm the company, our partners, or the industry.

And this person is someone who has the ability to take action.

Another part is simply forgetting.

A crucial ability in AI learning is systematic forgetting. You need to know when to forget things. You can't memorize everything indefinitely.

Many times, you just have to be tough on yourself. That's enough, stop crying; get up and get to work!

I think many top athletes are like that. They only focus on the next point. Embarrassment or setbacks are all in the past.

Lex, you do a lot of your work in public. So do I.

I often say things in public that I think make sense or are funny at the time, or at least I find them interesting.

Looking back now... it doesn't seem that interesting after all.

Lex Fridman: You once said something famous, something like, if you had known back then that creating Nvidia would be this difficult, a million times more difficult than you imagined, you probably wouldn't have done it at all.

But when I hear that, I think that almost everything that is truly worth doing is like that, right?

Jensen Huang: Absolutely right.

What I really want to express is that people should retain a childlike mindset.

When I see something, my first reaction is almost always, " How difficult can it be?"

No one has ever done this before; it seems incredibly massive, costing hundreds of billions of dollars and fraught with countless difficulties. But you'd still say—

How difficult could it be?

You can't simulate all the setbacks, failures, disappointments, and humiliations beforehand. You should approach an experience with a fresh mindset, thinking that it will be great, interesting, and exciting.

Once you're truly immersed in it, you'll need resilience and endurance. Because setbacks are inevitable, and they'll still come in unexpected ways.

Disappointment will surprise you, embarrassment will surprise you, humiliation will surprise you. But you can't let them hold you back.

At this point, you need to activate another mechanism: forget about it, and move forward.

As long as my fundamental judgments about the future remain unchanged, and as long as the input conditions do not undergo substantial changes, I believe the outcome will not change either.

I've always been curious and always learning. I'm always observing others.

Because I remain humble about many things, I always think, "They did a really good job, how did they come up with that idea, how did they think of it?"

In some ways, I've always been imitating others.

Lex Fridman: You are now one of the richest and most successful people on earth. In that position, isn't it difficult to remain humble?

Jensen Huang: It's strange, but it actually isn't.

It could even be the exact opposite. Because much of my work is done publicly, almost everyone will see if I make a mistake.

When I make a mistake, or when things don't go as planned, others can see it.

But in internal meetings, I often reason and speak simultaneously. In those situations, things could certainly go in different directions.

But that never stops me from continuing to reason. My management and leadership style involves constantly reasoning in front of others. Even now, as I'm talking to you, you can tell that I'm actually reasoning on the spot.

I hope you understand that what I'm saying doesn't mean you have to believe me just because I said it. I will show you how I arrived at this conclusion step by step. That way, you can judge for yourself whether you believe my final conclusion.

I do this every day in meetings.

I would say, "Let me tell you how I see this." Then I would explain my reasoning process.

This way, everyone will have the opportunity to stop me at any time and say: I don't agree with your move.

The best thing about this approach is that others don't need to directly oppose your conclusion. They can oppose only a specific step in your reasoning. Then they pull me in another direction, and we continue pushing forward together.

Lex Fridman: After so many years of tremendous success and pain, it's truly remarkable that you can maintain this level of composure. Sometimes pain can make a person withdrawn. But you haven't.

Jensen Huang: The ability to tolerate embarrassment is really important.

But you know, my first job was cleaning toilets.

Lex Fridman: I told a friend a few days ago that I wanted to interview you, and his first reaction was, oh, their gaming graphics cards are really powerful!

These hardware devices have indeed brought joy to many people, truly illuminating those virtual worlds. However, DLSS 5 also sparked some controversy before. Could you talk about the controversies surrounding it?

Jensen Huang: I understand, because I myself don't like that blurry AI feel.

Many AI-generated contents are indeed becoming more and more similar; although they are all beautiful, they are also becoming increasingly homogenized.

But that's not what DLSS 5 is trying to do at all.

I demonstrated many examples at the time. DLSS 5 is based on 3D conditional control and is also guided by real geometry.

In other words, the geometry is defined by the artist, and we are completely faithful to this geometric information, without changing it for any frame. It is also subject to the constraints of textures and artistic style.

Therefore, it enhances, rather than tampers with, each frame.

Of course, the question also lies in how exactly this enhancement should be implemented.

DLSS 5 is an open system, so developers can also train their own models to determine styles. In the future, it may even be possible to provide prompts directly.

For example, if I want a cartoon rendering style, or a certain visual effect, you can even provide a reference sample.

Then it will be generated in that style, while maintaining consistency with the original artistic style and artistic intent.

So all of this is actually for artists, to help them make their works more beautiful, while still maintaining their own style.

I think many players misunderstand: they assume the game was already finished, and that after release we used AI to post-process and change the graphics.

DLSS was not designed that way.

Essentially, it's giving artists an AI tool, a generative AI tool.

They can choose not to use it.

Lex Fridman: People are becoming more sensitive to the blurriness of AI. This is actually like a mirror, making us realize that what we really want is often some kind of imperfection.

It, in turn, helps us understand why we are moved by certain parts of the world.

Jensen Huang: Yes, AI is just another tool.

Furthermore, if developers want the generative model to do something completely contrary to the realism of a photograph, it can do that too.

Over the past few years, we've introduced skin shaders to game developers. Many skin effects in games now use subsurface scattering, making them look more like real skin.

The industry has always been looking for more tools to express art. This is just another tool. Ultimately, whether or not to use it is up to the developers.

AGI and Consciousness

Lex Fridman: Let's define AGI as an AI capable of founding a company with a market capitalization or value exceeding $1 billion. How far are we from that?

Jensen Huang: I think it's right now. I think we've already achieved AGI.

You said $1 billion, but you didn't say it was going to last forever.

Therefore, it's entirely possible that Claude could create a web service or a very interesting little application, which would suddenly attract billions of users, each paying 50 cents, quickly making a lot of money, only to die shortly afterward.

We've seen many companies like this in the internet age. And many of those websites weren't actually more complex than what OpenClaw can generate today.

Lex Fridman: So, you mean I have the opportunity to make a lot of money just by releasing an agent?

Jensen Huang: It's already happening now.

If you go to China, you'll see that many people are already teaching their Claude how to find work, take on jobs, and make money.

I wouldn't be surprised at all if some kind of social product, or a super cute digital influencer, or some kind of app that takes care of your virtual pet suddenly appeared and became inexplicably popular.

Many people use it for a few months, and then it slowly disappears.

Many people are genuinely worried about their jobs these days. But I want to remind everyone that the purpose of your work, and the tasks and tools you use to do that work, are related but not the same thing.

I've been doing this job for 34 years. And in those 34 years, the tools I use to do it have been constantly changing.

Back then, one of the first professions that computer scientists and AI researchers said would be replaced by AI was radiologists.

They believe that once computer vision reaches superhuman levels, AI will take over the analysis of radiological images.

Their technical assessment was actually correct. Computer vision has indeed reached superhuman levels.

But what happened? Their prediction was still wrong.

Today, almost all radiology imaging platforms and software packages are powered by AI. Yet, the number of radiologists has actually increased. In fact, there is a global shortage of radiologists.

Why is this happening?

First, the alarmist claims made at the time went too far, even scaring away some people who would otherwise have entered the industry.

Second, their mistake was that they mistook the task for the profession.

The true responsibility of radiologists is to help diagnose diseases, help patients, and help clinicians.

Now, because images can be read faster, radiologists can review more of them, make more accurate diagnoses, treat hospitalized patients sooner, and serve more patients.

Hospitals are earning more money and have a greater capacity to treat patients, so they need more radiologists.

Nvidia's software engineer workforce will continue to grow, not decline.

The reason is the same. A software engineer's responsibility is to solve problems. Writing code is just one part of that task. I never care how many lines of code my engineers write.

What I care about is whether they solved the problem.

Lex Fridman: You mean the total number of programmers might increase in the future?

Jensen Huang: The key is how you define programming.

In my view, programming today has essentially become specification: telling the computer clearly what you are going to build.

So the question is, how many people are capable of doing this?

I think this number may expand from 30 million to 1 billion.

In the future, every carpenter will be a programmer. And a carpenter with AI will also become an architect. The value he can provide to clients will be greatly enhanced, and his creative abilities will be significantly amplified.

I also believe that every accountant can be more like a financial analyst and a financial advisor at the same time.

Many professions will see an overall increase in income as a result.

Lex Fridman: And today's programmers and software engineers are actually at the forefront. They intuitively understand how to use natural language and agents to communicate in order to design better software. The two sides will gradually converge.

But I still think that learning to program in the traditional sense is valuable.

Jensen Huang: Yes. The reason is that specification itself has levels and an art. How you define it depends on what problem you want to solve.

For example, when I'm developing a strategy for the company, clarifying its direction, and deciding what we should do, I'll explain it in sufficient detail so everyone understands the direction and knows how to begin. But I'll also deliberately leave some room for ambiguity.

In this way, 43,000 very talented people have the opportunity to do it even better than I originally thought.

So when I work with engineers or teams, I think clearly about what problem I'm solving and who I'm working with.

The level of detail required in the specification is directly related to these conditions.

Everyone needs to learn where they want to stand on the spectrum of programming.

Sometimes you want something more specific and defined because you're looking for a very concrete result.

Sometimes you might want to be more open-ended because you want to explore, you want to engage in a back-and-forth struggle with AI, and let it push the boundaries of your own creativity a little further.

I believe this ability to strike a balance between different levels of specificity is the true art of programming in the future.

Lex Fridman: But even outside of programming, many people are indeed very anxious right now, especially white-collar workers.

Whenever automation and new technologies arrive, society always experiences a period of upheaval, and we don't really know how to deal with it.

I think the first thing is that we need to have compassion and a sense of responsibility to truly feel the pain that those who have lost their jobs and their families are enduring.

Technological changes of this magnitude, like AI, are bound to bring a lot of pain. And to be honest, I don't know how to deal with that pain.

I just hope that it will ultimately provide more opportunities for these people, allowing them to continue doing similar work, only with different tools that are more efficient and more interesting. Just like how programming has changed.

I have to say, I'm really enjoying coding now, more so than ever before. I hope AI can automate the tedious parts and leave the truly creative parts to humans.

Even so, there will still be a lot of pain and struggle in the process.

Jensen Huang: I feel anxiety too: about the future, about pressure, about uncertainty.

My first step is always to break the problem down. Then I tell myself that there are things I can do something about, and things I simply can't.

For anything I can do something about, we reason it out carefully and then act on it immediately.

If I were to hire a recent graduate today, I would have two candidates in front of me.

One person knows absolutely nothing about AI, the other is extremely skilled at using AI. I will definitely hire the one who is skilled at using AI.

Therefore, my suggestion is that every college student and every teacher should encourage students to learn how to use AI as soon as possible.

By the time every college student graduates, they should already be an AI expert. Whether you're a carpenter, an electrician, or someone in another profession, you should all use AI.

Of course, this technology will inevitably lead to job displacement. If your job is essentially a single task, you will likely be significantly impacted.

If the true value of your work lies in you as a person, but some of the tasks can be automated, then you should immediately learn to use AI to automate those tasks.

Lex Fridman: Is there something in human consciousness that is fundamentally non-computational? That no matter how powerful a chip is, it can never be replicated?

Jensen Huang: I don't know if chips will one day feel "nervous".

I believe AI can recognize and understand the conditions that cause anxiety, tension, or other emotions. But I don't think my chip will actually sense them.

We need to break down the word "intelligence" to understand it.

We talk about intelligence every day, but it's not mysterious. Intelligence is a system that includes the abilities of perception, understanding, reasoning, and planning.

This is actually a functional concept, not a word synonymous with humanity. I won't indulge in too many romantic fantasies about intelligence; I even think it will become a commodity.

I'm surrounded by intelligent people. They're better educated than me, went to better schools, and are more knowledgeable in their respective fields. I have 60 such people around me. To me, they're all like superheroes.

But I was the one sitting in the middle, coordinating these 60 people.

How can someone who used to wash dishes sit among a group of superhumans and organize them?

Intelligence is functional. Humanity, on the other hand, is not defined by function; it is a much broader term.

Our life experiences, our capacity to endure pain, our resolve—these are not the same concept as intelligence.

The word "intelligence" has been overhyped in the past.

Lex Fridman: What should really be elevated is humanity.

Jensen Huang: Character, humanity, compassion, generosity... these are the true superhuman strengths.

Intelligence, meanwhile, is what will be commoditized next.

In the past, people always said that education was the most important thing. But even if you acquire a lot of knowledge in school, what school gives you is never just knowledge.

Unfortunately, our society has long compressed too much into a single word.

But life is never just a single word.

My own life proves this point. Even if my intelligence curve is lower than many of the people around me, it hasn't prevented me from becoming the most successful one.

Don't start to feel anxious just because intelligence has been commodified.

You should be motivated by this.

Lex Fridman: Nvidia's success, and the livelihoods of the millions of people I just mentioned, are largely tied to you.

But you are ultimately just a person, and like all of us, you will die.

Are you afraid of death?

Jensen Huang: I really don't want to die.

I have a great life, a wonderful family, and a very important job.

What I'm experiencing now isn't a once-in-a-lifetime experience. Once-in-a-lifetime means that many people have experienced it, but each person only experiences it once.

What I'm experiencing is more like an experience on a historical scale. Nvidia is one of the most influential technology companies in history.

So of course, there are some very real issues, such as succession.

I don't believe in traditional succession planning. It's not that I think I won't die. The reason is that if you're really worried about succession, then your most important task today is to continuously pass on your knowledge, information, insights, skills, and experience.

That's why I constantly reason things out in front of the team.

Every minute I spend inside and outside the company is spent sharing what I know with others as quickly as possible. Any new knowledge I learn never stays on my desk for more than a few seconds.

"This is so interesting, you should go try it right away, you definitely have to learn this."

Even before I had fully understood it myself, I had already pushed it to others.

So I have been sharing knowledge, empowering others, and improving the abilities of everyone around me.

The best outcome is to die at work. Ideally, it should happen instantly, without prolonged suffering.

Lex Fridman: You've been thinking about the future. So, in closing, I'd like to ask:

What gives you hope for all of this—for humanity and for the future of humanity?

Jensen Huang: I have always had strong confidence in human kindness, generosity, compassion, and human capabilities themselves.

Sometimes I get taken advantage of because of this, but it never changes my starting point. I always believe that people want to do good deeds and help others.

And most of the time, I'm right. Many times, the results are even better than I expected.

We have so many problems to solve, so many things to build, and now these things are within reach, and there is even a chance to achieve them in my lifetime.

How could you not feel a sense of romance in the face of that?

Eradicating diseases, drastically reducing pollution, even light-speed travel: these are now things we can seriously talk about.

Of course, light-speed travel isn't for long distances, but it might be possible for short distances.

I might send a humanoid robot onto the spaceship, a humanoid robot designed based on my appearance.

Much of my life is already on the internet. Take my emails, everything I've done, everything I've said. These things will slowly become my AI.

At that time, all I need to do is send this part of the content at the speed of light, catch up with that robot, and then upload my consciousness.

Podcast link: https://www.youtube.com/watch?v=vif8NQcjVf0
