
A new report from WPP Media says global ad revenue will reach $1.14 trillion this year.

On Wednesday, the Fed will announce its interest rate decision and release the latest Summary of Economic Projections, followed by remarks from Fed Chair Jerome Powell.

U.S. stock futures were little changed Sunday, as investors anticipated another interest-rate cut by the Federal Reserve later this week.

Treasury Secretary Scott Bessent said that it's been a "very strong" holiday season for the economy and predicted that the U.S. would end the year with 3% real GDP growth. Bessent said American consumers' views on affordability have been affected by media coverage of the economy.

Markets widely expect a Fed rate cut this week, but the FOMC is unusually divided, with up to five dissenters likely. Oracle (ORCL) and Broadcom (AVGO) earnings are key, with both riding strong AI tailwinds and high expectations for revenue growth.

Markets are expecting the Federal Reserve to cut interest rates at its next meeting, but some traders predict volatility.

A late-November rally put markets back on their feet. Plus: A Financial Flashback to 60 years ago, when LBJ clashed with the Fed.

This FOMC meeting could catalyze a regime shift in long-end rates, potentially driving the 30-year Treasury rate toward 7%. Market focus will be on the degree of FOMC dissent and the dot plot, not just the expected 25 bps rate cut.

The Fed is likely to cut the Federal Funds rate by 0.25% at the December FOMC, moving to a 3.50-3.75% range. However, the Fed could be on a long pause after the December cut, which could be interpreted as hawkish, and thus the market reaction is likely to be negative.

Each week, Benzinga's Stock Whisper Index uses a combination of proprietary data and pattern recognition to showcase five stocks that are just under the surface and deserve attention.

Recent gains reflect more than just optimism about artificial intelligence.

The week draws to a close with risk assets largely buoyed by the prospect of an interest rate cut from the Federal Reserve. The US dollar was slightly weaker on Friday but generally stayed within its recent trading range against other major currencies.

The S&P 500 came off its best week since May with another solid performance. The 50-day moving average has been above the 200-day moving average since July 1st.

SEC Chairman Paul Atkins joins 'Mornings with Maria' at the NYSE to discuss the rise of digital trading, the shift on Wall Street, and why U.S. markets remain the world's benchmark.

The S&P 500 eked out a 0.3% gain last week, showing signs of waning momentum after a strong prior surge. Investors rotated into NASDAQ, AI-related stocks, Tech, Communication Services, Energy, and High Beta names, while reducing exposure to Emerging Markets, Utilities, and defensive sectors.
Sunny Lin: Good morning, good evening. Thank you all for attending the conference call with ASE. I'm Sunny Lin, covering Greater China [Semis] at UBS. It's my great honor to host Mr. Yin Chang, Executive VP of ASE for Sales and Marketing. He will be sharing with us how advanced packaging innovations are evolving to support cloud AI technologies. ASE's IR team, Ken Hsiang, Iris Wu and Chiayi Liao, will also be on the line to take questions with Yin toward the end of the event. Through the session, if you have any questions, please feel free to e-mail me the questions at sunny.lin@ubs.com. So with that, let me hand over to you, Yin, for the presentation.
Yin Chang: Thank you, Sunny. Good morning, everyone. Thank you for this opportunity to share ASE's view on advanced packaging and how we are driving AI forward. So next page. I think this is really given that AI is really here. AI application is changing how we look at health care, telecommunication, retail, financial services. And this will dramatically increase our AI economy from $189 billion in 2023 to over $4.8 trillion in 2033. This is a dramatic 25-fold increase. And this is generated by all the data that we are -- as a consumer put together for AI to consume, learn and then inference for all our future AI application. Next. If you look at how AI is spending, we all know that the AI spending is exploding. In Q2 of 2025, we hit a new high of $87 billion out of only 8 major hyperscale builders from Alphabet to Oracle. CapEx as a share of revenue also broke 45% in Q2 2025, a dramatic increase. We expect this trend to continue, and this is great news for our ASE semiconductor exposure. Next. So if you look at how the data center CapEx and AI semiconductor spend break down, the majority of the spend is in compute, which is where we're going to focus a lot of our talk, followed by networking and memory and followed by power. We also will address a little bit about the power concerns in the AI future. Next. So this chart shows a very interesting thing. So if you look at 2020, '21, '22 and even '23, the amount of revenue per device correlates very well with the volume. So as you sell more, the value or the revenue increases. But starting from 2024, that trend dramatically changed: even though we don't sell as much, the value or the money people are willing to spend on the devices dramatically increases. So even though the growth for the volume is modest, the amount of revenue we generate is tremendously higher, which means that the value-add that companies such as ASE can put into an AI system is also being monetized and valued in the AI equipment market. So this shows a tremendous promise for us going into the future. Next. So what are the key demand challenges in AI? Well, obviously, performance is the #1 requirement. The dramatic increase in compute through the latest large language models pushes us to create higher network bandwidth in memory and increase the amount of capacity in HBM that we put on each particular GPU or accelerated chip. And this creates a problem in area, because as the number of chips we put down goes from 4 HBM to 8 HBM to 10 HBM to as much as 16 HBM, the area of that package dramatically increases. So now we are looking at 100 by 100 millimeter square, and how do we deal with those large areas. And the next is power.
For power, we are now looking at how do we power so many chips within just one blade to now thinking 72 blades together or 72 chip together or up to 154 chip together in the latest NVIDIA requirements. And obviously, if you put in power, the thermal will be the next key consideration for the packaging challenges in the AI era. Next. One more tab. So this kind of shows you why compute is being driven so quickly. And this kind of shows you the data set and the compute performance by various large language models. And it's growing at 3 to 4x per year. And at the current rate, we're looking at 1.7x in number of chip quantity just for the AI industry. And then we need to improve performance of each chip by almost 1.5x per year, which is significantly faster than Moore's Law, which is a challenge for the chip designer, the chip foundry and -- but this is where advanced packaging can really come into effect. How do we put everything together, achieving this performance requirement without all the benefit of Moore's Law. And this is the benefit of advanced packaging. Next. So just look at the AI model for this. You kind of see just from April '23 to October '25, which is a little bit over 18 months, the performance has increased almost 50% in terms of how fast the model has improved from OpenAI, from X AI, from Alphabet. And this is what the compute consumptions are, and this is what the requirement is putting together to the silicon players and to ASE as a company to come up with solution that can provide the compute that's ever increasing and insatiable demand of compute power by this open AI or AI model requirements. Next. So what we talk about with the compute, if we're able to achieve the amount of compute that we see in the previous slides in the various models, what you really require is the amount of data you need to feed those AI accelerators. So what we kind of see, if you look at the HPC, the high-performance computing HBM road map, the number of HBM per each generation continue to grow. So from the left-hand side, which is the MI300 and 350, we didn't put the 450 on there. But if you look at the medium road map, they will be reaching 16 in Rubin 300. And in the various other models, we're also looking at least 12 HBM in the coming year. This is we drive the area portion of our challenges as advanced packaging. So if you look at the very bottom, on the NVIDIA Rubin Ultra, the substrate area is 153x77. The interposer size is 124x 50. This is a tremendous advanced packaging challenge for us to put this many die in such a small space. Next. So you look at the HBM integration trend, the reason we are driving to HBM4 is trying to leverage the faster and faster bandwidth and allow us to transmit as many data as possible up to 1.5 terabytes per second. And by putting 16 HBM into 3 chiplet, which is that cartoon I show, this creates a much larger interposer requirement and also require more RDL layers to connect all these chips together. And this is the advanced packaging. And this actually creates a challenge but also opportunity for system architect or chip architect to create a unique chiplet and memory solution to support the next-generation AI compute requirements. Next. So the key challenges for large-size modules, what the big challenge is, as it gets bigger and bigger, the number of chip per wafer for 300 wafer drops significantly. 
So if you go by 100x100, there's already only 7 modules or 7 die per wafer and keep dropping as the package gets bigger and bigger, interposer get bigger and bigger. So this is the challenge for us is how do we maintain the yield while reducing the number of chip or interposer per wafer. So we have a solution for that in the coming slides. Next. So with the compute solution, one of the challenges for us is how do we deliver power. Power is very key to the success of the AI chipset. So we all know that when you route copper through the substrate, you have routing losses. And the distance from the VRM to the chip is very important. So how do we reduce the distance and reduce the voltage losses is a key challenge for us. So for us is how do we put this power solution as close to a silicon as possible. And what are the solutions that we can come up with that can put up a vertical voltage regulation onto the package itself. So this challenge is something that we need to figure out and deliver precise power into this complex chipset and HBM structure. Next. So if you look at the overall AI compute rack power, what you will see is that if you look at -- in 2020, we only look at 10 kilowatt per rack. But by 2024, you look at Blackwell, it's already 120 kilowatt per rack. And this is really driven by a number of chips within that same rack. And then if you look at what the future holds, we're already seeing 600-kilowatt rack solution and a megawatt rack solution will not be far behind. So one of the key things for us is how do we deliver this solution not only to the chip itself, but also find out the power solution to the rack. And I think that with our experiences in some of the high-voltage applications in other industry suit us very well in trying to create a power solution for this higher and higher voltage requirement into the most complex rack AI that will be coming into the marketplace in a very short time. Next. So with power thermal. So we are looking at how do we do thermal solution in the very increasing watts and voltage environment. So this chart is courtesy of AMD. And what it shows is if you look at the red bar on the left, that is the CPU power. As we go in time, the power gets higher. And then if you do the green bar, that's a GPU power and then GPU power actually gets increasing into 1,500 watts. But ironically, the higher the watt, the temperature need to be operated actually drops. So if it's lower voltage, you actually can run the chip hotter. But with higher voltage, you actually need to run the chip cooler. So create even bigger problem for us in trying to run the chip at optimum temperature with an increased power consumption. So this is something that the industry needs to work through and ASE is going to participate in how do we also work on the overall package thermal consumption and thermal requirements. Okay. Next. So one of the things that we will do for the compute problem that we discussed earlier is we leverage the VIPack that we announced back in 2022. VIPack is a collection of advanced packaging technology from FOPoP, package on package to 3D ICs to FOCoS-Bridge to package optic and to FOCoS SiP and just FOCoS, which is the fan-out chip on substrate. Next. And what we're really focused right now is 2 type of VIPack. One is the FOCoS, which is fan-out chip on substrate. This is what's very common today in a lot of the AI solution. 
And you kind of see some of the cross-section that ASE has done to create the latest chiplet architecture or heterogeneous integration solution that combine whether it's I/O buffer die with graphic accelerators or graphic accelerator with HBM. And then to the right, FOCoS-Bridge is our next solution or the solution of choice for some of the higher density solution where we are using a silicon bridge between HBM and a graphic accelerator to maximize the I/O count between the connection while minimize the RDL that is needed to route between those 2 die or collection of those die. Maybe there will be 3 GPU with 16 HBM. And kind of show you the cross-section between the Bridge and a C4 pump, and that show you that our pitch can be as low as 130 microns. Next. So this just show you the FOCoS extension platform, specifically talking about Bridge and they kind of show you the package size that go to 100 by 100 millimeters, and they kind of show you various constructions or various opportunity for the chip architect to create. It kind of show you that we can have maybe a GPU accelerated die with some memory controllers with memory itself or just parallel GPUs with memory, like a device [indiscernible] or a collection of chiplets with I/O, SRAM GPU, neural network chips, all connected through FOCoS-Bridge solution where the Bridge are connected -- are connecting the chip to each other and to the next die next to it. So these are the opportunity that we see with FOCoS-Bridge that creates a tremendous amount of creativity for the AI chip of the future. Next. So one of the challenges for that is as the package gets bigger, the utilization on a 300-millimeter wafer start to drop. And we mentioned earlier, if once you get to 5 or 6 reticle size, the number of those chips per wafer is dropped down to 8, 7, possibly 6 and that's only 57% of the utilization. So we really need to figure out a way how do we increase our utilization. So ASE has been working on a panel solution. We have demonstrated 300-millimeter panel and also 600-millimeter panel that shows that we can increase the overall utilization from 57% up to 87%. And this dramatically allow us to produce this complex solution in scale. And that is the key for ASE is how do we scale this into as high a volume as possible. Next. So this shows the actual example of panel FOCoS-Bridge. This shows basically 2 chiplets with HBM onto a large panel. So you have 2 SoC dies and SoC 1. So this creates -- and then we put 10 chiplet with 10 silicon bridge onto that one section. And the middle chart shows you the whole panel, how we put it together. And this is the fan-out construction and it's by laser direct via solution. So it's -- this is an increase in number of unit per panel versus a wafer on the Bridge construction. Next. So as a panel road map for ASE, we're looking at 310x310 for the HPC and AI for fine-pitch for 2 micron and 2-micron line space with Bridge and IPT. And then we are looking at large panel for fan-out MCM or some people call it wafer MCM that allow us to do mobile application or edge AI application that doesn't require a fine line -- line and space and allow us to do the full fan-out RDL as a substrate. So you actually create a very thin multichip modules with complex RDL underneath. So you kind of show you the 600-millimeter fan-out MCM and also the chip glass and Bridge solution that we mentioned earlier that give us the 310 and then moving to 600. Next. 
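The utilization figures discussed above lend themselves to a quick back-of-the-envelope check. The sketch below is an illustrative reconstruction, not ASE data: it assumes a roughly six-reticle module of about 52 x 99 mm and a naive row/grid placement, so the exact counts shift with real die sizes, scribe lanes and keep-out rules, but it reproduces the basic effect: a round 300 mm wafer wastes its edge area on very large packages, while a 310 mm or 600 mm panel recovers most of it.

```python
# Illustrative utilization sketch (editorial example, not ASE data).
# Assumption: a "~6-reticle" module of roughly 52 x 99 mm; real die sizes,
# scribe lanes and keep-out rules will shift the exact counts.
import math

def dies_per_wafer(die_w, die_h, wafer_d=300.0):
    """Row-by-row packing estimate on a round wafer: stack rows of height die_h;
    each row's usable width is the chord at whichever row edge is farther from center."""
    r = wafer_d / 2.0
    n_rows = int(wafer_d // die_h)
    y = -(n_rows * die_h) / 2.0            # bottom edge of the lowest row
    total = 0
    for _ in range(n_rows):
        y_top = y + die_h
        edge = max(abs(y), abs(y_top))     # farther row edge from the wafer center
        half_chord = math.sqrt(max(r * r - edge * edge, 0.0))
        total += int((2.0 * half_chord) // die_w)
        y = y_top
    return total

def dies_per_panel(die_w, die_h, panel_w, panel_h):
    """Simple grid packing on a rectangular panel, better of the two orientations."""
    a = int(panel_w // die_w) * int(panel_h // die_h)
    b = int(panel_w // die_h) * int(panel_h // die_w)
    return max(a, b)

die_w, die_h = 52.0, 99.0                  # assumed large fan-out module, mm
die_area = die_w * die_h
wafer_area = math.pi * 150.0 ** 2

n_wafer = max(dies_per_wafer(die_w, die_h), dies_per_wafer(die_h, die_w))
print(f"300 mm wafer : {n_wafer} modules, "
      f"{100 * n_wafer * die_area / wafer_area:.0f}% area used")
for p in (310.0, 600.0):
    n = dies_per_panel(die_w, die_h, p, p)
    print(f"{p:.0f} mm panel: {n} modules, {100 * n * die_area / p**2:.0f}% area used")
```

Under these assumptions the wafer carries roughly 8 modules at under 60% area utilization, while the square panels land in the 80-95% range, in line with the direction of the 57% versus 87% comparison quoted in the presentation.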
So once we are able to put all the chip onto a panel or wafer or modules, the next thing we really want to look at is how do we put the power solution to it? How do we drive all the chip with the necessary power with the first and second stage regulator. So we have created powerSiP that basically allow us to put all of the first and second stage regular directly underneath the substrate. So we are creating the minimum amount of distance between the power source and the silicon itself. So instead of putting a side-by side, as we show on the right-hand side, where we take a 12 volt down to 1 or 0.8 volts or even just do first stage and second stage, now we are actually putting both of them underneath as a vertical regulated modules, what we call a powerSiP that allow us to deliver the power at the most -- closest to the silicon and reduce the overall loss and then achieve a maximum efficiency of the power delivery. Next. And then this kind of shows you the latest thing for the power solution on data center. So if you look at today on top, if you look at the alternating current from the grid to the data center. And basically, we're dropping it from plus or minus 400 down to maybe 480 or lower or plus or minus 220, so 440 solution onto the data center and driving it at 400-volt direct current. And one of the things that will be more efficient is convert that directly from plus or minus 400 AC current directly down to 800 direct current. And then using a solid-state conversion and then drive the whole backbone of data center using higher voltage. And this allow us to create more power efficiency throughout the grid and also creates growth in the overall infrastructure going to the future. And a simpler distribution system and with a fewer point of failure will create a more robust data center solution. And this is also aligned with overall thinking in terms of leveraging what we already learned in some of the other industry that also leverage the 800-volt systems. So ASE is in a prime position to work with customers in developing this 800-volt DC systems. Next. So this kind of shows you one example of using gallium nitride to our silicon carbide as a primary chip solution and module solution to create a monolithic low-voltage conversion into the silicon itself, but also allow us to drive from 48 volt down to 12 or 6 or 0.71. And this actually creates a better solution, a more solid-state solution instead of going through AC conversions. And for us is this gives ASE another opportunity to create more value within the data center ecosystems. Next. So driving power itself through electrons is one challenge. But another way to solve that same challenges is trying to convert the electron into photons. So ASE has put in a tremendous effort in trying to work on full-package optic or basically trying to communicate data transfer through photons. And we believe that is the future of data center, the combination of electron and photons. Next. So this kind of shows you all the various toolbox that ASE has demonstrated through the passive alignment for fiber attach to creating a cavity through laser direct edge to chip-on-wafer trying to put electric IC on top of photonic IC through a fan-out POP solution. You kind of show the various cross-section, and that creates a silicon photonic engine that can be used in part of the CPO solution that will show a little later. And then give you the various way to put in the laser diodes that provide the laser source for the photonic. 
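Stepping back to the power-delivery points above, both ideas (putting the regulator as close to the die as possible, and moving the data-center backbone toward 800-volt DC) come down to the same I-squared-R arithmetic: for a fixed power, lower voltage means higher current and much higher resistive loss in the delivery path. The sketch below uses assumed path resistances and voltages for illustration only, not ASE-specified figures.

```python
# Back-of-the-envelope I^2*R sketch for power delivery (assumed figures, not ASE data).

def delivery_loss(power_w, voltage_v, path_resistance_ohm):
    """Resistive loss in a delivery path carrying `power_w` at `voltage_v`."""
    current = power_w / voltage_v
    return current ** 2 * path_resistance_ohm

# 1) Rack distribution: the same 120 kW rack fed at 48 V vs 800 V DC,
#    through an assumed 5 milliohm bus path.
rack_power = 120_000.0
for v in (48.0, 800.0):
    loss = delivery_loss(rack_power, v, 0.005)
    print(f"{rack_power/1000:.0f} kW rack at {v:>5.0f} V: "
          f"{rack_power/v:,.0f} A, {loss/1000:.1f} kW lost in a 5 mOhm path")

# 2) Last-millimeter delivery to the die: ~1,500 W at ~0.8 V is roughly 1,900 A,
#    so even 0.1 mOhm of routing between regulator and die burns hundreds of watts.
die_power, die_voltage = 1_500.0, 0.8
for r_mohm in (0.1, 0.02):
    loss = delivery_loss(die_power, die_voltage, r_mohm / 1000.0)
    print(f"{die_power:.0f} W die at {die_voltage} V through {r_mohm} mOhm: "
          f"{loss:.0f} W lost")
```

This is the intuition behind both powerSiP-style vertical regulation (shrink the high-current, low-voltage path to almost nothing) and 800-volt DC distribution (keep currents, and therefore conductor losses and copper, manageable at the rack level).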
And obviously, the submicron accuracy is important for all of the die attach or the chip-on-wafer integration. So these are the various tools that ASE already developed that can help our customers to create the next-generation optical solution for the AI hypercenter. Next. So there are 3 key challenges in trying to create optical engine onto a CPO. So one is really to deal with warpage. There's warpage on the optical engine. There's a warpage on the substrate, which is organic substrate. And then all these things have an impact the way we do fiber attach, whether we do active alignment or passive alignment, these are the key challenges for us to put together. So you kind of see the large different ring on top of this CPO demonstration that we did for the customer. And these are demonstrating that we could show on the next page. Next page. So this is the thing that we did for CPO test vehicle, where you kind of see the network IC in the middle with different optical engine, and that allow us to connect 8 different optical fiber onto this switch solution. And you kind of see what we did with the [indiscernible] rings to maintain the warpage on -- not only the optical engine, but on the substrate itself. And this is really a large package of 75x75 package size, and this is all joined by copper pillars with the fan-out POP solution. Next. And this kind of shows you just kind of the wealth of solution that we are offering in terms of toolbox, whether it's ASIC on chip on substrates, whether it's optical engine, and it kind of show you on the same thing how each of the optical engine and the substrate are connected through copper pillars and copper bumps. And so those are demonstration that ASE can execute a large panel network, basically CPO solution in the next-generation AI solutions. Next. So why do we want to do that? Because we believe that the high-density RDL packaging such as VIPack is only one solution that we can do, but that's not everything we can do. So we need to put in the photonic system. By putting the photonic system, we can dramatically increase the overall compute performances while reducing the power because we don't have the same losses through photons as we do electron through copper wires. So with the combination of the high-density RDL such as VIPack and the CPO that we showed you earlier that this can dramatically increase the overall compute to meet the latest LLM compute requirements. Next. So lastly talk about thermal. So the thermal is that we use the same chart with GPU power over 1,500 with CPU power over 600. There are many ways that we are looking at it. So today, ASE is really looking at system solution, which is the standard cold plate that sits on TIM 1 that sits on top of the TIM 2 sits on top of the heat sink that sits on top of the die itself. So right now, we are looking at various material that can improve the overall heat conductivity between the silicon and the cold plate itself. But we are also examining the potential of the silicon solution where the coolants actually are directly in touch of the silicon itself. So instead of have 2, 3 or 4 different thermal interfaces, we are able to bring the coolant directly to silicon to dissipate the heat and allow the chip to run at its optimum temperature. 
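The thermal-interface point just made can be put in concrete terms with a one-dimensional conduction estimate: the temperature drop across an interface layer is roughly delta_T = (P / A) x (t / k), so at kilowatt-class die power the interface conductivity and bond-line thickness dominate the budget. The die area, thickness and conductivity values below are assumptions for illustration, not ASE or AMD figures.

```python
# 1-D conduction sketch: temperature drop across a thermal interface layer,
# delta_T = (P / A) * (t / k). All numbers below are illustrative assumptions.

def tim_delta_t(power_w, die_area_mm2, thickness_um, k_w_per_mk):
    """Temperature drop (K) across a TIM layer of given thickness and conductivity."""
    heat_flux = power_w / (die_area_mm2 * 1e-6)        # W/m^2
    return heat_flux * (thickness_um * 1e-6) / k_w_per_mk

die_power = 1500.0        # W, GPU-class device from the discussion
die_area = 800.0          # mm^2, assumed exposed die / heat-spreader area
bond_line = 100.0         # um, assumed TIM thickness

for k in (5.0, 25.0, 80.0):   # conventional paste vs improved vs graphite-class TIM
    dt = tim_delta_t(die_power, die_area, bond_line, k)
    print(f"k = {k:>4.0f} W/m-K -> {dt:5.1f} K drop across one interface")
```

With two or three such interfaces stacked between die and cold plate, tens of degrees can be consumed before the coolant is ever reached, which is the motivation for both higher-conductivity interface materials and, eventually, bringing coolant directly to the silicon.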
And this is the next generation where you kind of see the solution is migrating from system to chip, which offer ASE another opportunity to develop silicon level solution and able to produce the next-generation compute power that's needed for the future AI requirements. So for the thermal TIM solution that we have talked about, today, we're looking at standard dispense method. We have developed the graphite method. We have done the solar -- that you kind of see the ability for us to increase the overall thermal conductivity from below 10 to right around 86. So we are looking at various level trying to improve the thermal conductivity between various interfaces. But as I mentioned earlier, the potential is not just improving the thermal coefficient in the interfaces, but also bringing the coolant directly onto the silicon itself, and that will be the next-generation development. So we look at the overall packaging innovation and the packaging architecture and technology, if we look at performance, we already see the latest AI model push our overall performance by 2 to 7x. And this is what is required for the AI chip to meet. And then for that, we need to drive the memory. And for the memory to increase, then we need to drive the area. And once you put the compute chip together, then we need to figure out how to deliver the power suite or the precise power directly onto this array of silicon on top of the modules. And then once you put in the power, obviously, the thermal will be the next consideration. And so with the packaging itself, we kind of show you how we are able to use the heterogeneous integration that combines various functions, whether it's CPU, GPU, XPU or various I/O or memory chip solution along with HBM put together in whether it's in chiplet or in a fan-out solution with bridge or using 2.5D silicon interposer. And if the package get too big, then we need to look at 300-millimeter panels or 600-millimeter panel to leverage the overall efficiency and maximize the yield and also the scaling of the overall solution. Then with power, we are demonstrating the vertical voltage regulators and that is the power and we try to put in the power regulator as close to silicon as possible and trying to create the backside power that's needed. And if I then convert electrons into photons like CPO, that also reduce the overall power consumption in a given compute. Obviously, when we give more compute, then the CPO requirement will also increase. And last thing is the thermal. So thermal is something that ASE is looking into in terms of how do we figure out the next-generation cooling structure beyond the thermal interfaces and the thermal interface material that we already are producing through silicon microchanneling or possibly even new material set. And this gives you a summary of what we are looking at in terms of innovation to fulfill the compute needs for overall industry. So in summary, we kind of see AI and data continue to fuel the semiconductor innovation. The proliferation is given with the amount of money that the overall industry is asking us to produce the next breakthrough in terms of solutions, and we are accelerating through really the heterogeneous integration advancement. We're putting various types of function die together, various size of function die together. We're putting side-by-side. We're putting on the 3D format. And this type of heterogeneous integration is the innovation, I think, that fuels the AI data growth. 
And last is, we truly believe that package creativity is the enablement for AI growth or AI path. So it really helps us in terms of enhancing the functionality and also improving the overall efficiency of any particular compute silicon solution. With that, I thank you for your attention and time. Sunny Lin: Sure. Thank you very much, Yin, for your great presentation. So now let's move on to Q&A session. So once again, if you have any questions, please feel free to e-mail me at sunny.lin@ubs.com. So let me kick off. So maybe, Ken, first question for you. Since we have you, lots of questions on how ASE's LEAP and testing segment will scale going to 2026. In October, management did guide at over $1 billion sales upside going to 2026 on top of this year's USD 1.6 billion. And so, could you perhaps share with us how the outlook has evolved going to 2026 after you reported? And then how we should think about the ramp for the business across maybe outsourcing, your full process CoWoS and final test? Kenneth Hsiang: So the commentary for 2026 thus far, we have not given a tremendous amount of color as you've mentioned. The only real comment as of now that we've talked about is that leading-edge advanced packaging will be growing by more than $1 billion next year. The components of that are -- haven't been particularly talked about, but I think it would be fair to say that they are being led by our traditional LEAP services, meaning on substrate and also, to a certain extent, the testing related to such devices. Towards the back half of the year, we should see a much more pronounced ramp-up in our full service type applications and services. Sunny Lin: Got it. Also, in terms of the ramp of full service type of package going to second half of 2026, if you may, how should we think about the technology? Will it be driven by maybe more traditional FOCoS or will it be a combination of FOCoS and FOCoS-Bridge? And how should we think about your technology readiness for FOCoS-Bridge? Would you say now the yield has reached a good level and therefore, you are seeing increasing customer engagement? Kenneth Hsiang: FOCoS-Bridge, we do believe to be, as Yin mentioned in his presentation, quite an important part of the overall ramping in terms of AI system architecture at the chip level. It does provide incredible increased performance between the processing unit and the memory dies. So this is something that is particularly important for our ongoing ramps going forward. We have not talked about, again, the makeup of 2026, but the full process, I think many sell side, including yourself, have written up articles on this particular trend. But we do not -- again, we don't have any new information, but we do believe that this should be increasingly important. In terms of yield, we are -- we have -- we don't generally disclose yield, but we do have full a process work that we are completing or providing right now during 2025. 2026, we should see maybe a different set of customer products, maybe ramp towards the back half of the year. Sunny Lin: So on margin for LEAP and testing, the company has got a higher margin and so accretive versus IC ATM. But IC ATM gross margin is at a low base for 2025. And therefore, when management talk about the segment being margin accretive, is it fair to say it's higher even compared with the high end of the range for IC ATM structural gross margin being about 30%? So the segment gross margin should be over 30%. And so that's the first part of the question. 
And then the second part will be, how should we think about the margin outlook for the segment going to 2026? Should we expect maybe better margin given larger scale, maybe better yield and also ramping of full process? Kenneth Hsiang: So leading -edge advanced packaging is accretive towards our structural margins. So structural margins, we generally talk about in terms of maybe a 70% overall utilization being tied to a 24%. The trough of the structural range and then full utilization at around 85% or so tied to a 30% ceiling margin in terms of the structural range. But leading-edge advanced packaging in total, all the components do create incremental margin or are accretive to the overall structural mix, right? So that would mean that each of those components do -- are higher, so to say. In terms of what we're looking at for next year, again, we're not commenting a lot on that. But in total, we do believe that given that the -- ideally, the FX headwinds are behind us, and we should see a much more friendly environment for our margin structure. So this year, I think we did show a decent amount of margin recovery, especially if you do take the FX component out or adjust for the FX component. So next year, we should continue to see the overall margin environment improve. And then I think Joseph talked about next year having full year margins well within the structural context. Sunny Lin: Got it. So maybe moving on to a question on panel-level packaging for HPC application that you talked about. And so where is ASE in terms of the technology readiness for, let's say, 300 for HPC? And based on your current technology development and also client engagement, when do you think we should see the first wave of product migration? Will it be maybe 2028 that people talk about? Or do you think it would take a bit longer? Kenneth Hsiang: I think panel is part of an overall set of delivery, set of services and products that we offer. In terms of moving towards an overall panel service and full readiness, I think right now, we have equipment coming in during this year. We have some level of qualification for next year and then a very -- maybe a minor level of revenue towards the end of next year. But as of this point, we haven't seen a mass migration yet, but that's not to say that this won't happen. I think there's a lot that has to do with the overall ecosystem being ready, meaning machinery, meaning things that may not be within our control. But again, this is part of an overall view. I think as panel does become more ready, I think it provides an opportunity for leading-edge advanced packaging type services to permeate into different levels of products, not just within this the very peak of the pyramid, so to say, in terms of electronics technology or electronics usage. So maybe we might see things kind of drift off towards maybe a mobile application or other applications. And I think that might be -- might allow for leading-edge advanced packaging to grow even faster. Sunny Lin: Got it. So Ken, if we take a step back and look at CoWoS or FOCoS it may be fair to say ASE ramp maybe a bit later than some of your peers. And so now with the potential migration to panel level for HPC, would you say ASE started working very hard to be able to address the first wave of opportunities if it comes? Kenneth Hsiang: I think our position has always been fairly steady. We like to do things when -- we don't like to get very much ahead of the technology. 
I think those situations result in less than optimal returns for the overall company. I think we do like to see machinery or standards become fairly well developed before we really scale things up. So from our perspective, we are where we like to be. We don't necessarily have a timeline in place. But I think if the market does call upon us to scale up, I think we can be ready along with our foundry partners in this particular area. Sunny Lin: Got it. And then on this very interesting topic around HVDC, so maybe if you could share a bit more color on how OSATs or ASE could play a more important role. What's the content, let's say, between the current solution versus HVDC? Should we assume the packaging for HVDC to be more complicated and therefore, opportunity for you to expand value going forward? Kenneth Hsiang: I'm unaware of the abbreviation you used there, the H -- what did you pronounce it? Sunny Lin: High Voltage Direct Current power delivery. So basically, Yin talked about in the presentation that potentially data center could migrate to 800 volt in the coming future for better power efficiency, even less coverage. Kenneth Hsiang: I think from our -- from where we sit, just from a natural perspective, voltage conversion getting closer and closer to the die becomes ever more important. I think Yin made that -- he highlighted a couple of key points on that. And then as power efficiency becomes more and more important to not just from a cost savings perspective, from like maybe a global power what am I looking for here, kind of an eco-friendly type perspective in which AI is projected to consume nuclear reactors worth of power, I think this type of power efficiency delivery or the capability to deliver that becomes increasingly important. I think ASE is position in terms of where we sit just from a geometric scale perspective as being the bridge to these dies that are consuming a lot of power. I think being -- not being able to do the monolithic methodology of doing -- providing power into those dies makes ASE a very natural provider for such technology or electrical delivery methods or changes in those methods. So I think this is a very high opportunity for us as these products become or develop further and further. So I don't have a lot of extra information for you here, but we are working on a number of key fronts in this area. Sunny Lin: Sure. No problem. So maybe back to full process, given a lot of attention for your ramp. So would you be able to share maybe a bit more on what are the type of products or clients that you are ramping going to late 2026? Some of your competitors talk a lot on the expanding product base beyond like A accelerators going to 2026. So for your ramp on full process, would you be able to share a bit more on what are the type of products and clients that you're ramping going to the second half of next year? Kenneth Hsiang: Again, we're not -- we have not given a lot of color in terms of 2026 in terms of which products we are involved with. But we do believe our foundry partners are fairly busy overall. There are a lot of opportunities available to us given the lack of resource across the entire industry. So at this point, we don't -- again, we don't specify exactly which customers, but we do have a very wide breadth of exposure in this area. Sunny Lin: Sure. No problem. And maybe on CPO. So presentation also showcased several packing opportunities along the process from EIC and PIC stacking, FAU assembly, the packaging overall for maybe on substrates. 
And so how should we think about business model going to CPO? Would you say it could be multiple type of business models, just like for CoWoS and FOCoS, you could work with foundry, you could also try to ramp full process. Basically, how should we think about the opportunities going to CPO and your positioning? Kenneth Hsiang: I guess, I think from our perspective, silicon photonics is particularly, again, a very interesting area that has been highlighted. Being next to the die or interfacing to these processing units, it puts us in a very unique area in terms of being able to provide interface as part of the silicon photonics solutions. There are a number of different standards and methodologies being talked about and developed at this point in time. We don't have -- again, in terms of these types of situations, we try to be fairly agnostic. We don't try to push one or the other. We just want to be part of the endgame solution when there are volumes and when there are returns to be had. At this point, the silicon photonics revenue levels for us are still relatively small. So we're not talking a lot in terms of the end solutions that we're seeing. But when things do start ramping up in a more major way, I think we can talk about that then. And we are not -- I don't think silicon photonics in terms of a revenue perspective is a major part of the '26 outlook at this point. Sunny Lin: Sure. No problem. Maybe if I may switch gear a bit to test. So Ken, lots of expectations on you ramping, if any, on final test, especially for A accelerators in the coming few years. We all understand some engagement may take time. But could you share with us what's the latest progress on your ramp for final test for A accelerators? Kenneth Hsiang: I think from our perspective, we are seeing progress in terms of being able to do more final tests within this space. However, given the time lines in terms of how our buildings and facilities are coming ready and then time lines in which other products are going to be coming in, I think the focus right now that we have and what we're seeing is probably more geared or more focused towards wafer probe. We should see significant wafer probe expansion during '26 and then probably see a little bit more final test exposure on the AI front towards the back half of the year. But these are all subject to time line and customer products and stuff. But we are fairly excited in our overall growth opportunities within test. I think when we're -- right now, we're going to finish the year closer to an 18% to 19% range in terms of how test is part of the overall ATM revenue. A more natural percentage is probably closer to maybe a 30% or 1/3, 2/3 type relationship between testing and assembly. So we do have quite a bit of growth opportunity if we just test the products that we package. So we will continue to push that forward. And again, our overall test story is about overall test, not necessarily just focused on leading edge or maybe whatever customer that investors may be particularly interested in at this time. Sunny Lin: Sure. No problem. But I guess for wafer probe, even the foundry, the fab space is quite constrained and therefore, it's indeed releasing, increasing demand opportunities for OSAT that you have been benefiting, but I think some of your peers also seems to be benefiting from this year. So how should we think about from here? 
Do you think the market is big enough to accommodate multiple suppliers for wafer probe and therefore, there should be no impact on the pace for your ramp? Or would you expect maybe at some point, there may be some competitive dynamics that we need to watch?
Kenneth Hsiang: I think for wafer probe, being the largest packager out there, we are uniquely positioned to take on more wafer probe opportunities. We also believe that our overall labor-free or maybe lights-out type solutions do help contribute to the cost and performance of wafer probe for us. So we continue to expect wafer probe to continue -- to keep expanding. And as part of an overall turnkey type solution, again, ASE being the largest packager, we should see -- we should be uniquely positioned again to receive wafer probe along with the final test in terms of overall services and test.
Sunny Lin: Sure. Got it. No problem. So I actually got a question from an investor. It may be a bit technical, so Ken or Yin, if you could answer. It's on power delivery. So Yin, you mentioned in your presentation that ASE is able to provide a power suite of voltage regulators as backside power delivery. So for those voltage regulators, would you be sourcing from the IC manufacturers? Or would ASE be able to make them in-house?
Kenneth Hsiang: Why don't I pass that along to Yin. Yin, do you want to take a stab at that question?
Yin Chang: Can you repeat the question one more time?
Sunny Lin: Yes. So for the power delivery that you talked about, the power suite ASE is looking to offer -- there are regulators in the suite. So will ASE buy regulators from others? Or would you make them in-house?
Yin Chang: I think the module itself, we make it in-house. But for the PMIC, it's typically customer specified or customer custom silicon. So it's a combination, I guess. We will make the module ourselves, but the chip itself typically is consigned or bought.
Sunny Lin: Okay. Sure. No problem. It's about time to wrap up. Ken, anything you want to highlight before we close?
Kenneth Hsiang: I think probably the key point here in terms of -- in this presentation, I think we've talked a lot about the technical aspects of what we encounter. But I think the key point that people should remember is that as monolithic manufacturing becomes less and less capable in terms of providing these end solutions, a lot of the value of what used to be created on a single die is now spreading out across multiple die, thus having us provide more value in this particular space. So we are excited about the various opportunities that will be presented to ASE via this type of technology propagation.
Sunny Lin: Sounds good. Thank you very much. Looking forward to 2026. All right.
Kenneth Hsiang: All right. Thank you. Thank you very much.
Yin Chang: Thank you.
Sunny Lin: Bye-bye.
Kenneth Hsiang: All right. Bye-bye.

AI has taken the market by storm. Its implications for the economy are already being felt in a major way.

The first half was a story of volatility driven by tariffs, while fears of an economic downturn and monetary policy are the most recent concerns facing investors, according to Shana Sissel of Banríon Capital Management. Sissel thinks fear of an A.I.

The stock market finished on the cusp of record highs on Friday, led higher as the odds of another Federal Reserve rate cut looked like a foregone conclusion.

The Fed is widely expected to cut interest rates for the third meeting in a row when it meets this week.