
July 10, 2025 24 mins


The race toward more powerful AI carries a hidden cost that's becoming impossible to ignore: skyrocketing energy consumption. Did you know AI is projected to devour 10% of global electricity by 2030? This staggering figure has even forced tech giants to delay their sustainability goals.

Enter ASE's Executive Vice President Yin Chang, who explains how the world's largest outsourced semiconductor assembly and test provider is tackling this challenge head-on. The solution lies in new approaches to power delivery and data transmission. By integrating voltage regulators directly into substrates, power can be delivered right next to the compute die, drastically reducing conversion losses. Even more promising is the shift from electrons to photons for data transmission, which cuts power consumption by an impressive 6x.

At the heart of these innovations is ASE's VIPack platform, a comprehensive toolbox that empowers system architects to create maximally efficient AI systems. Moving compute components closer together minimizes power requirements, while co-packaged optics enable the crucial electron-to-photon conversion for longer-distance communication. These technologies aren't just theoretical: industry leaders like NVIDIA and AMD are already implementing them, with significant efficiency improvements expected within five years.

The conversation extends beyond data centers to the future of AI at the edge. As foundry processes advance toward smaller nodes, the voltage requirements decrease, making AI more viable for battery-powered devices. Chang envisions a near future where personal devices run limited AI models locally, offering enhanced privacy by processing sensitive data without cloud dependencies.

Discover how advanced packaging is becoming the unsung hero in balancing our appetite for AI innovation with the planet's energy limitations. Follow ASE Global on LinkedIn or visit aseglobal.com to learn more about their pioneering work in sustainable semiconductor solutions.





Like what you hear? Follow us on LinkedIn and Twitter

Interested in reaching a qualified audience of microelectronics industry decision-makers? Invest in host-read advertisements, and promote your company in upcoming episodes. Contact Françoise von Trapp to learn more.

Interested in becoming a sponsor of the 3D InCites Podcast? Check out our 2024 Media Kit. Learn more about the 3D InCites Community and how you can become more involved.


Episode Transcript

Available transcripts are automatically generated. Complete accuracy is not guaranteed.
Francoise von Trapp (00:00):
All around us, digital transformation is changing the way people live, work, play and communicate. As a leading provider of semiconductor packaging and test services, ASE plays a significant role in the development of the world's most innovative electronics. Their technologies enable customers to create cutting-edge products that deliver superior performance, power, speed and

(00:20):
connectivity. Learn more at aseglobal.com. Hi there, I'm Francoise von Trapp, and this is the 3D InCites Podcast.

(00:43):
Hi everyone. You know, one of the things that keeps me up at night is how the rapid growth of AI technology is impacting our energy grid. In fact, did you know that by 2030, AI is expected to consume 10% of the world's electricity? Now, luckily for us, the advanced packaging community is really hot on the case of solving these challenges.

(01:05):
And here to talk to me about this is ASE's Executive Vice President, Yin Chang. Welcome to the podcast, Yin.

Yin Chang, ASE (01:12):
Oh, thank you.

Francoise von Trapp (01:14):
So you know, ASE is the world's largest outsourced semiconductor assembly and test service provider, otherwise known as OSAT, correct? Yes? So if anybody can solve this, you guys can.

Yin Chang, ASE (01:28):
Well, we definitely are part of the community trying to understand how to improve the overall power efficiency, and there are multiple ways to improve power efficiency. But the simplest way is trying to move as close to the silicon as possible. So one way is trying to create these voltage regulator modules that we put directly into the substrate so we can deliver

(01:50):
power as close to the compute SoC as we can. So that will improve the overall power efficiency, so we can get more compute with the least amount of power possible. So that will help us resolve some of the AI power hungriness. Second is just moving from electrons to photons, which is getting a lot of emphasis these days in our space.

(02:13):
So ASE is working hard to put co-packaged optics directly on the package, and by simply doing that, compared to today's pluggable silicon photonics modules, we're able to reduce the overall power consumption by 6x, from 30 picojoules per bit down to less than 5 picojoules per bit.

(02:33):
So hopefully through that we can slow down the power requirement as AI compute continues to increase.
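
As a quick sanity check on the figures just quoted, here is a back-of-envelope sketch. The per-bit energies come from the conversation; the 100 Tb/s aggregate bandwidth is an assumed number for illustration only, not a figure from ASE or the episode:

```python
# Back-of-envelope check of the co-packaged optics savings quoted above.
# Per-bit energies are from the conversation; the bandwidth is assumed.
PLUGGABLE_PJ_PER_BIT = 30.0  # today's pluggable optical modules
CPO_PJ_PER_BIT = 5.0         # co-packaged optics

def optical_io_power_watts(pj_per_bit: float, tbps: float) -> float:
    """Power drawn by optical I/O at a given aggregate bandwidth."""
    bits_per_second = tbps * 1e12          # terabits/s -> bits/s
    joules_per_bit = pj_per_bit * 1e-12    # picojoules -> joules
    return bits_per_second * joules_per_bit

# At an assumed 100 Tb/s of off-package traffic:
pluggable_w = optical_io_power_watts(PLUGGABLE_PJ_PER_BIT, 100)  # ~3000 W
cpo_w = optical_io_power_watts(CPO_PJ_PER_BIT, 100)              # ~500 W
print(pluggable_w / cpo_w)  # the ~6x reduction mentioned
```

The point of the arithmetic is that at data-center bandwidths, picojoules per bit translate directly into kilowatts per rack.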

Francoise von Trapp (02:42):
Okay, so let's back up a little bit and talk about what's causing this. I mean, we know we've had a rapid, rapid adoption of AI technology. Did we expect that it would have the impact on the power grid that it's having?

Yin Chang, ASE (02:57):
No, I think what the past few years have taught us is that, as the algorithms for AI models continue to evolve and accelerate, the compute required to train, post-train and do inference for those models is far exceeding our ability to innovate compute. So what we end up doing is just putting in more and more compute

(03:19):
and, as you put in more and more compute, that draws even more and more power. So those are the challenges that we face, because the only thing we know how to do today is building faster processors, putting more processors inside the data center, but in essence, that basically creates the power problem that you described earlier. So it's a bit unanticipated.

(03:40):
The acceleration of AI was much faster than we all anticipated.

Francoise von Trapp (03:45):
Okay. So we've seen that companies like Apple and, I think, Meta have pushed out their net-zero goals. They were very focused on sustainability, but because of AI being so energy intensive, they've actually pushed out those goals. And I know that ASE is very, very focused on its

(04:05):
ESG and sustainability efforts. So how are you working to help these companies meet their goals through developing the technologies that will address the power hunger?

Yin Chang, ASE (04:18):
I think ASE is very committed to ESG. The environment is very important to ASE as a company. So as a company we continue to reduce our carbon footprint. But to help, you know, a company that you mentioned, whether it's Apple, Alphabet or Meta, I think one of the things we are trying to understand is what is their optimum power requirement and

(04:41):
power efficiency. And we're trying to understand, you know, not every model is the same. Not every model requires the same amount of heavy compute. So how do we help them to design from a packaging standpoint, help their silicon, help their system? I think one of the things that advanced packaging has done

(05:02):
recently is kind of highlight the importance, or the assistance that we could create, on the overall system efficiency. And so the system architect and silicon architect, and now there's a thing called the package architect, are now working together and saying, you know, maybe I can use photons here, maybe I can use a different power delivery system here

(05:22):
in a package, and with the combination of it, I can reduce overall package power consumption. I can maybe solve some of the thermal issues that come with high power, and then collectively I don't need liquid cooling; maybe I can do air cooling, so I can reduce the power drain in the overall system:

(05:44):
not just the electricity that goes to drive the compute silicon, but the overall power that's required to drive the whole data system.

Francoise von Trapp (05:47):
Right, because it's not just about the
silicon.

Yin Chang, ASE (05:50):
Yes.

Francoise von Trapp (05:50):
It is all of it.

Yin Chang, ASE (05:52):
Yes, because the more power you put in, the harder it gets. You need to figure out a way to cool it, and then if you need chilled refrigerant, then that's even harder. So everything just multiplies in terms of power consumption. So what ASE could do on the advanced packaging side is maybe try to reduce the power at the source, and then by reducing the

(06:15):
power at the source, the subsequent power added to compensate can be reduced.

Francoise von Trapp (06:23):
Okay, so ASE has a whole VIPack platform.

Yin Chang, ASE (06:28):
Yes.

Francoise von Trapp (06:28):
That has different elements to support different steps, or the different types of advanced packaging. How is that enabling this heterogeneous integration era that you're talking about?
that you're talking about?

Yin Chang, ASE (06:40):
The VIPack platform is a platform that we introduced in late 2022. What it does is really provide a comprehensive toolbox for a system architect or a silicon architect to look at what is the best way to arrange the compute and the memory requirements. So it enables chiplets, but that's not all it enables. It

(07:05):
also enables the conversion from electrons to photons. So we are trying to offer the community the ability to be creative, to be able to comprehend what they need and then reconstruct it to a point where they reach the maximum

(07:25):
compute that they're achieving, they reach the power that they need, and then they also reach the maximum efficiency that they can get. So VIPack is a platform, but it's more than just a platform. It's really this comprehensive tool that people can use to create their next AI breakthrough.

Francoise von Trapp (07:49):
Okay, so you've been talking about electrons and photons and how those can be adjusted or used to improve the power efficiency of an AI package. And I don't know if we call them AI packages. I mean, I know we have AI chips, but then they also become like subsystems, right?

Yin Chang, ASE (08:10):
Or packages.

Francoise von Trapp (08:11):
So, for instance, when you're using a chiplet architecture, how would you use that to improve power efficiency for the AI chips?

Yin Chang, ASE (08:27):
I think for us, you know, the power is really a function of the compute distance.

Francoise von Trapp (08:34):
Okay.

Yin Chang, ASE (08:34):
So, as you get closer and closer together, the less power required to drive it, the less current required to drive it. So the 3D and 2.5D advanced packaging solutions that we offer as part of the VIPack solution give us the minimum distance required for the compute to communicate with a memory die,

(08:56):
communicate with the IO controller. So all this ability for us to reduce that compute distance is what the VIPack toolbox provides. And in addition to that, we are now able to offer, you know, co-packaged optics that allow some of the electrons to convert

(09:17):
to photons. So not only do I gain bandwidth, but I reduce the power that's required to drive that distance. So that is kind of the goal for us: by adding not just one solution but multiple pieces, we can achieve the maximum power efficiency.
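
The point that power is really a function of compute distance can be sketched with the standard CV² switching-energy estimate for an electrical link. The capacitance-per-millimeter and supply voltage below are illustrative assumptions, not figures from the episode:

```python
# Illustrative CV^2 estimate: signaling energy grows linearly with wire
# length, because wire capacitance does. Constants are assumptions.
CAP_PER_MM_F = 0.2e-12  # assumed wire capacitance per millimeter, farads

def energy_per_bit_pj(length_mm: float, vdd: float = 0.8) -> float:
    """Approximate C*V^2 energy to signal one bit over a wire, in pJ."""
    capacitance = CAP_PER_MM_F * length_mm
    return capacitance * vdd * vdd * 1e12  # joules -> picojoules

# Shrinking a 10 mm package-level hop to a 1 mm 2.5D/3D hop cuts
# the per-bit signaling energy by the same factor of 10:
print(energy_per_bit_pj(10.0), energy_per_bit_pj(1.0))
```

This is why 2.5D/3D stacking saves power before any optics enter the picture, and why, past centimeter-scale distances, the electrical cost makes the photonic conversion worthwhile.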

Francoise von Trapp (09:36):
So where do you use the electrons and where
do you use the photons?

Yin Chang, ASE (09:41):
So the electrons are typically used within the compute matrix, right? So you have the SoC die on top of the IO driver, on top of a memory controller. On top, maybe there are people looking at putting 3D memory directly onto the compute structure. So all those things are connected through through-silicon vias,

(10:05):
copper pillars. Those are all electrons. Now, the minute you need to communicate with the outside, maybe another GPU, another CPU, you know, that is probably best to do with photons, because instead of looking at nanometers or microns, now you may be looking at millimeters, and millimeters to centimeters.

(10:26):
Those are the ones that draw more power, generate more heat.

Francoise von Trapp (10:30):
That's a larger pipeline.

Yin Chang, ASE (10:31):
Yes, and it's also a longer pipeline.

Francoise von Trapp (10:33):
Okay. And so how do you convert electrons to photons, without it getting too technical?

Yin Chang, ASE (10:41):
Well, silicon photonics is really a combination of three pieces of silicon. There is an electronic IC, there's a photonic IC. That EIC and PIC combination basically is the conversion of electron to photon.

Francoise von Trapp (10:54):
Okay.

Yin Chang, ASE (10:55):
And then, once the signal is converted, it's transmitted through a laser diode, okay, and that's sent through a fiber optic.
Okay, and that's sent through afiber optic?

Francoise von Trapp (11:03):
Okay, right. And then does it get converted back to electrons when it gets to the next?

Yin Chang, ASE (11:08):
Yes.

Francoise von Trapp (11:08):
It's almost like a relay.

Yin Chang, ASE (11:10):
It's like a relay. So basically we shine light down through a pipe, and then, once you reach the destination, it can be converted back to electrical signals, and the electrical signals get processed at that site, and they can send that result to the next compute die in the chain. So if you look at the NVIDIA NVL72, they connect 72 dies together,

(11:33):
and the communication between all those silicons is through photons, not electrons.

Francoise von Trapp (11:38):
And so the photon transmission, is it faster, and does it generate less heat, than the electrons?

Yin Chang, ASE (11:48):
Yes, it requires less power. It's less power-hungry, or more power-efficient. And then, as far as heat is concerned, there's less heat going through fiber optics, in terms of what the overall system will require. The more optical solutions you use, the wider the pipe, the

(12:11):
larger the bandwidth, right. But to achieve the power efficiency you've got to get close to the die, right. So that's where the CPO comes in.

Francoise von Trapp (12:22):
So are there other areas besides co-packaged optics that you are working towards that will also help reduce the power needs of AI?

Yin Chang, ASE (12:32):
Well, I think we're working on these integrated voltage regulator modules, where a lot of the first-stage and second-stage regulators are put very close to the silicon itself. So instead of driving 48 volts to 12 volts to 8 volts to 1 volt to 0.8 volts, we're trying to put all those conversions as close

(12:55):
to the silicon as we can, and by doing that I reduce a lot of the power loss from doing all these long-distance conversions, and the more power loss I get, the more power I need to put in to recover. So that's one of the things for us to create more power efficiency through the overall package system, and that's

(13:18):
another way that we're hoping to reduce the power requirement for a given level of compute. What we're seeing in the marketplace is people saying, well, if you can give me more efficiency, what I want is more compute, right? Okay. So this insatiable requirement for compute acceleration is one

(13:43):
of the causes of the problem that you described earlier.
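
The regulator chain described here (48 V stepped down through 12 V, 8 V, 1 V, 0.8 V) loses a slice of power at every conversion stage, which is the motivation for integrating regulators into the substrate. A minimal sketch of that cascade effect, using made-up but plausible per-stage efficiencies rather than ASE data:

```python
# Cascaded converter losses: efficiencies multiply, so every extra
# stage (and every lossy board-level hop) shrinks delivered power.
# Stage efficiencies below are illustrative assumptions, not ASE data.

def delivered_fraction(stage_efficiencies: list[float]) -> float:
    """Fraction of input power that survives a chain of converters."""
    fraction = 1.0
    for eff in stage_efficiencies:
        fraction *= eff
    return fraction

# A long board-level chain, e.g. 48V -> 12V -> 8V -> 1V -> 0.8V:
long_chain = delivered_fraction([0.95, 0.93, 0.90, 0.90])
# Integrated voltage regulators near the die collapse it into fewer hops:
short_chain = delivered_fraction([0.95, 0.93])
print(long_chain, short_chain)  # roughly 0.72 vs 0.88 delivered
```

Every watt lost in conversion is a watt that must be supplied, and then cooled, which is the multiplication effect described earlier in the conversation.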

Francoise von Trapp (13:46):
Okay. And one of the other areas I've heard is still a challenge is thermal issues.

Yin Chang, ASE (13:51):
That's basically heat relating to power loss. I think thermal is a function of the power that you put in. You know, because the more electrons you put through any kind of structure, you will end up getting some heat, unless there's zero resistance. I think for us, from an advanced packaging point of view, the

(14:19):
idea is really trying to maximize the power efficiency, and then hopefully the amount of current we need for a given compute can be more modest, right? So then the amount of heat that it generates subsequently would not be quite as high.

Francoise von Trapp (14:33):
Okay. So how close are we to solving these challenges?

Yin Chang, ASE (14:39):
I think for the power regulators, we are within the next few years. And then for CPO, for the leading edge, there have already been announcements from NVIDIA and AMD that they are actively looking to implement it in the next-generation data center structure.

(14:59):
So I think within the next five years you will see, I think, a dramatic increase in efficiency, the way we want to have it in terms of the overall system. So from the package point of view, I think there is going to be more compute using photons, or more communications in photons, and then the power delivery will be much more

(15:21):
efficient, as we deliver power directly into the silicon.

Francoise von Trapp (15:25):
Okay. You know, I think back to, I don't know, 10, 15 years ago, before everybody talked about AI, or, you know, everybody talked about AI, but we never expected it to be as big as it is already, and the main driver of all of this technology was our smartphones. And so, you know, at that point everything was small form

(15:47):
factor, smaller chips, and now we're looking at these larger-size dies. How does that impact, from ASE's perspective, the decision to, like, develop? How do you know what's coming next, and how do you leverage all of your knowledge to address all of these different markets?

Yin Chang, ASE (16:07):
I think for us, you know, high performance computing and AI have driven ASE to accelerate our innovation. So VIPack is an example of such innovation: creating, like we

(16:31):
mentioned earlier, as many tools as we can to help the HPC and AI silicon architects or system architects solve some of their problems. And what we found is that some of the tools we create may be able to trickle down to the more consumer-based products, and some of the APs, or application processors, you mentioned earlier may benefit from the development that we're currently doing on

(16:52):
the AI side. So the package may not grow forever, but some of the tools we can extract and scale for the large consumer market. The cell phone could easily be one of them. We're already looking at some of the CPU designs for laptops that can leverage some of the tools that we use, you know, namely fan-out solutions.

(17:14):
So I think everything that we're doing for the most advanced package requirements is able to trickle down to affect our daily lives. So maybe it's even as much as the robotics that people talk about, right? So all those things require higher compute, maybe not at

(17:36):
an AI type of standard, but those technologies can easily migrate or transfer to more day-to-day consumer products.

Francoise von Trapp (17:46):
And that's the cycle we see, right? Things always start where it's more expensive and high performance, because the return on investment is there: the high performance computing. It's not at the consumer level.

Yin Chang, ASE (17:58):
Right.

Francoise von Trapp (17:59):
But you know, I was just thinking now, as you were talking. We're talking like these big AI dies, right? What needs to happen to get the power of that kind of compute, and we've seen it happen with other things, right, to be reduced and shrunk to work in a smartphone down the road?

Yin Chang, ASE (18:25):
People are already working on shrinking the overall silicon. So you know, as we go from 2 nanometer to 1.8 nanometer and below that, the amount of voltage required to drive those circuits can be reduced, right? So it can help in terms of the sort of battery operation that we need for cell phones. But all those things still require a lot of data, a lot of

(18:50):
bits. So how do you connect the memory to those compute devices? And some of the advanced packaging solutions can help solve those problems. So I think it's really a combination of the advances in foundry silicon design, combined with advanced packaging, to transfer from the current high-power, high-current solutions down to maybe low-power, you

(19:13):
know, micro-current solutions.

Francoise von Trapp (19:15):
So, for instance, if we're using our phone now, right, you can do an AI search, right? Like you go on Google. But that's not on the phone. That AI function is in the data center. We don't really have AI functions yet in our handsets, do we?

Yin Chang, ASE (19:34):
Not in the way that people think; it's not ChatGPT, for example, right. But I think people are envisioning where a subset of the AI function can reside on your phone, that can access your calendar, access your email, access what's on the phone itself, and do a limited amount of processing. Right.

(19:55):
So those will be potential AI agents that help your daily life. You don't need to ask very complex questions, but it helps your daily life activity: it will know or anticipate what your daily activity may be, and then recommend certain things or help you prepare certain things. So all those things are not as data intensive, but require your

(20:20):
own data.

Francoise von Trapp (20:22):
So then it's also more encapsulated, because you're not really pulling data from everywhere; you're just working with what you have locally.

Yin Chang, ASE (20:32):
It becomes private.

Francoise von Trapp (20:34):
So, okay, my last question, because I'm now tangential, but I just keep thinking of these things. We hear a lot about edge AI, AI at the edge. So when we're talking about edge, we're talking about our personal devices, right, for the most part. How far are we away from that? I know there are, like, AI-enabled PCs now, right, and there's AI

(20:55):
software. Where does that fit into the whole AI picture that we're looking at?

Yin Chang, ASE (21:03):
I think they are closer than we think. I think there are already people who are downloading Llama, Meta's open source model, directly onto their laptop and running a certain version of Llama to answer questions using a limited amount of data from outside, or maybe just on their own hard drive.

(21:34):
So I think running AI functions on your laptop is not that far away. I think the question is how useful it is for an individual person. It's always better to run with a data set as large as possible, so running off the cloud makes more sense, right? But if you want private AI, then you want to run it off your devices, because that's your data. You want the results of it to be private to yourself, and the

(21:59):
actions it takes to be only for you. So I think the next step for AI, from my personal view, will be, you know, the agentic AI, where they're reasoning about what you may want, and then they're deciding based on your preferences, and they can take

(22:19):
action to execute those preferences.

Francoise von Trapp (22:24):
So some of the major markets right now that are benefiting from AI are things like the medical market and, you know, robotics and industrial. Is there a market where you think AI should not be used?

Yin Chang, ASE (22:39):
I think AI can be used, you know, like anything, in moderation. I think that, you know, as far as, you know, medical research and looking for new proteins, I think AI will be extremely useful. In terms of some of the morality questions about what AI should do in terms of genomes

(23:02):
and so on, I might not be the best person to ask that question. But I think AI can be used in almost everything in your life, but probably in certain moderation.

Francoise von Trapp (23:14):
And some areas it's ready for it and some
areas it's not.

Yin Chang, ASE (23:17):
Yeah, I think, you know, I'm not sure an individual will relinquish all their decision power to generative AI, to say, I'm just going to let AI decide my day, right, and then you will plan it all out for me. So I think there's still a certain amount of self-empowerment that people want to have over their lives,

(23:40):
but AI can definitely be a very helpful assistant to those lives.

Francoise von Trapp (23:46):
So now, where should people go to learn more about ASE and VIPack?

Yin Chang, ASE (23:51):
I think they should go to aseglobal.com, and please search for us on LinkedIn at ASE Global.

Francoise von Trapp (23:56):
Okay, great. Thanks so much. There's lots more to come, so tune in next time to the 3D InCites Podcast. The 3D InCites Podcast is a production of 3D InCites LLC.