How I Became A Machine Learning Engineer (No CS Degree, No Bootcamp)

Machine learning and AI are among the most popular topics nowadays, especially within the tech space. I am fortunate enough to work and develop with these technologies every day as a machine learning engineer!

In this article, I will walk you through my journey to becoming a machine learning engineer, shedding some light and advice on how you can become one yourself!

My Background

In one of my previous articles, I wrote extensively about my journey from school to securing my first data science job. I recommend checking out that article, but I will summarise the key timeline here.

Pretty much everyone in my family studied some sort of STEM subject. My great-grandad was an engineer, both my grandparents studied physics, and my mum is a maths teacher.

So, my path was always paved for me.

Me at age 11

I chose to study physics at university after watching The Big Bang Theory at age 12; it’s fair to say everyone was very proud!

At school, I wasn’t dumb by any means. I was actually relatively bright, but I didn’t fully apply myself. I got decent grades, but definitely not what I was fully capable of.

I was very arrogant and thought I would do well with zero work.

I applied to top universities like Oxford and Imperial College, but given my work ethic, I was delusional thinking I had a chance. On results day, I ended up in clearing as I missed my offers. This was probably one of the saddest days of my life.

Clearing in the UK is where universities offer places to students on certain courses where they have space. It’s mainly for students who don’t have a university offer.

I was lucky enough to be offered a chance to study physics at the University of Surrey, and I went on to earn a first-class master’s degree in physics!

There is genuinely no substitute for hard work. It's a cringey cliché, but it's true!

My original plan was to do a PhD and be a full-time researcher or professor, but during my degree, I did a research year, and I just felt a career in research was not for me. Everything moved so slowly, and it didn’t seem there was much opportunity in the space.

During this time, DeepMind released their AlphaGo — The Movie documentary on YouTube, which popped up on my home feed.

From the video, I started to understand how AI worked and learn about neural networks, reinforcement learning, and deep learning. To be honest, to this day I am still not an expert in these areas.

Naturally, I dug deeper and found that a data scientist uses AI and machine learning algorithms to solve problems. I immediately wanted in and started applying for data science graduate roles.

I spent countless hours coding, taking courses, and working on projects. I applied to 300+ jobs and eventually landed my first data science graduate scheme in September 2021.

You can hear more about my journey on a podcast.

Data Science Journey

I started my career in an insurance company, where I built various supervised learning models, mainly using gradient boosted tree packages like CatBoost, XGBoost, and generalised linear models (GLMs).

I built models to predict:

  • Fraud — whether someone fraudulently made a claim for profit.
  • Risk Prices — the premium we should quote someone.
  • Number of Claims — how many claims someone will make.
  • Average Cost of Claim — the average claim value someone will have.

I made around six models spanning the regression and classification space. I learned so much here, especially in statistics, as I worked very closely with actuaries, which really sharpened my maths knowledge.
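To give a flavour of what those models looked like, here is a minimal, hypothetical sketch of a fraud-style classifier. I'm using scikit-learn's GradientBoostingClassifier on synthetic, imbalanced data as a stand-in for CatBoost/XGBoost, so every feature, number, and name below is illustrative rather than anything from my actual work:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic "claims" data: in practice, features might be claim amount,
# policy age, claimant history, etc. Fraud is the rare positive class.
X, y = make_classification(
    n_samples=2000, n_features=10, weights=[0.9, 0.1], random_state=42
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# A gradient-boosted tree ensemble, the same family as CatBoost/XGBoost.
model = GradientBoostingClassifier(n_estimators=100, max_depth=3, random_state=42)
model.fit(X_train, y_train)

# AUC is a sensible headline metric for imbalanced fraud classification.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Test AUC: {auc:.3f}")
```

The same template, with a regression objective and metric swapped in, covers the claim-count and claim-cost models too.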

However, due to the company’s structure and setup, it was difficult for my models to advance past the PoC stage, so I felt I lacked the “tech” side of my toolkit and understanding of how companies use machine learning in production.

After a year, my previous employer reached out to me asking if I wanted to apply to a junior data scientist role that specialises in time series forecasting and optimisation problems. I really liked the company, and after a few interviews, I was offered the job!

I worked at this company for about 2.5 years, where I became an expert in forecasting and combinatorial optimisation problems.

I developed many algorithms and deployed my models to production on AWS using software engineering best practices, such as unit testing, lower environments, shadow systems, CI/CD pipelines, and much more.
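As a tiny illustration of the unit-testing side, here is a hypothetical test for a forecast post-processing helper. The function `clip_forecast` and its behaviour are invented for this example; the point is simply that every piece of production logic, however small, gets a test a CI pipeline can run:

```python
def clip_forecast(values: list[float], floor: float = 0.0) -> list[float]:
    """Clamp forecasts to a floor, e.g. demand can never go negative."""
    return [max(v, floor) for v in values]


def test_clip_forecast_removes_negatives() -> None:
    assert clip_forecast([-1.5, 0.0, 2.3]) == [0.0, 0.0, 2.3]


def test_clip_forecast_respects_custom_floor() -> None:
    assert clip_forecast([0.5, 5.0], floor=1.0) == [1.0, 5.0]


# pytest would discover these automatically; calling them directly also works.
test_clip_forecast_removes_negatives()
test_clip_forecast_respects_custom_floor()
print("all tests passed")
```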

Fair to say I learned a lot. 

I worked very closely with software engineers, so I picked up a lot of engineering knowledge and continued self-studying machine learning and statistics on the side.

I even earned a promotion from junior to mid-level in that time!

Transitioning To MLE

Over time, I realised that the real value of data science lies in using it to make live decisions. There is a good quote by Pau Labarta Bajo:

ML models inside Jupyter notebooks have a business value of $0

There is no point in building a really complex and sophisticated model if it will not produce results. Seeking out that extra 0.1% accuracy by stacking multiple models is often not worth it.

You are better off building something simple that you can deploy, and that will bring real financial benefit to the company.

With this in mind, I started thinking about the future of data science. In my head, there are two avenues:

  • Analytics -> You work primarily to gain insight into what the business should be doing and what it should be looking into to boost its performance.
  • Engineering -> You ship solutions (models, decision algorithms, etc.) that bring business value.

I feel the data scientist who analyses and builds PoC models will become extinct in the next few years because, as we said above, they don’t provide tangible value to a business.

That’s not to say they are entirely useless; you have to think of it from the business’s perspective of return on investment. Ideally, the value you bring in should exceed your salary.

You want to say that you did “X that produced Y”, which the above two avenues allow you to do.

The engineering side was the most interesting and enjoyable for me. I genuinely enjoy coding and building stuff that benefits people and that they can use, so naturally, that’s where I gravitated.

To move to the ML engineering side, I asked my line manager if I could deploy the algorithms and ML models I was building myself. I would get help from software engineers, but I would write all the production code, do my own system design, and set up the deployment process independently.

And that’s exactly what I did.

I basically became a Machine Learning Engineer. I was developing my algorithms and then shipping them to production.

I also took NeetCode’s data structures and algorithms course to improve my fundamentals of computer science and started blogging about software engineering concepts.

Coincidentally, my current employer contacted me around this time and asked if I wanted to apply for a machine learning engineer role that specialises in general ML and optimisation at their company!

Call it luck, but clearly, the universe was telling me something. After several interview rounds, I was offered the role, and I am now a fully fledged machine learning engineer!

Fortunately, the role kind of “fell into my lap,” but I created my own luck through up-skilling and documenting my learning. That is why I always tell people to show their work — you don’t know what may come from it.

My Advice

I want to share the main bits of advice that helped me transition from a data scientist to a machine learning engineer.

  • Experience — A machine learning engineer is not an entry-level position, in my opinion. You need to be well-versed in data science, machine learning, software engineering, etc. You don’t need to be an expert in all of them, but you should have good fundamentals across the board. That’s why I recommend gaining a couple of years of experience as either a software engineer or data scientist while self-studying the other areas.
  • Production Code — If you come from data science, you must learn to write good, well-tested production code. You must know things like typing, linting, unit tests, formatting, mocking, and CI/CD. It’s not too difficult; it just requires some practice. I recommend asking your current company for opportunities to work with software engineers to gain this knowledge; it worked for me!
  • Cloud Systems — Most companies nowadays deploy much of their architecture and systems on the cloud, and machine learning models are no exception. So it’s best to get practice with these tools and understand how they enable models to go live. I learned most of this on the job, to be honest, but there are courses you can take.
  • Command Line — I am sure most of you know this already, but every tech professional should be proficient in the command line. You will use it extensively when deploying and writing production code. I have a basic guide you can check out here.
  • Data Structures & Algorithms — Understanding the fundamental algorithms in computer science is very useful for MLE roles, mainly because you will likely be asked about them in interviews. It’s not too hard to learn compared to machine learning; it just takes time. Any course will do the trick.
  • Git & GitHub — Again, most tech professionals should know Git, but as an MLE it is essential. Knowing how to squash commits, do code reviews, and write outstanding pull requests is a must.
  • Specialise — Many MLE roles I saw required you to have some specialisation in a particular area. I specialise in time series forecasting, optimisation, and general ML based on my previous experience. This helps you stand out in the market, and most companies are looking for specialists nowadays.
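On the Git point above, squashing doesn't have to go through the interactive rebase editor. The sketch below runs in a throwaway repository, and the file names and commit messages are all made up for illustration:

```shell
set -e

# Work in a throwaway repo so nothing real is touched.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

# Three messy work-in-progress commits.
for i in 1 2 3; do
  echo "change $i" >> model.py
  git add model.py
  git commit -q -m "wip: step $i"
done

# Squash them into one clean commit without the interactive editor:
# soft-reset keeps all changes staged, then amend folds them into one.
git reset --soft HEAD~2
git commit -q --amend -m "feat: add forecast model"

git log --oneline   # a single tidy commit remains
```

The interactive equivalent is `git rebase -i HEAD~3`, marking the newer commits as `squash`; the soft-reset route is just easier to script.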

The main theme here is that I basically up-skilled my software engineering abilities. This makes sense as I already had all the math, stats, and machine learning knowledge from being a data scientist.

If I were a software engineer, the transition would likely be the reverse. This is why securing a machine learning engineer role can be quite challenging, as it requires proficiency across a wide range of skills.

Summary & Further Thoughts

I have a free newsletter, Dishing the Data, where I share weekly tips and advice as a practising data scientist. Plus, when you subscribe, you will get my FREE data science resume and short PDF version of my AI roadmap!

Connect With Me

Shape
Shape
Stay Ahead

Explore More Insights

Stay ahead with more perspectives on cutting-edge power, infrastructure, energy,  bitcoin and AI solutions. Explore these articles to uncover strategies and insights shaping the future of industries.

Shape

AWS stealthily raises GPU prices by 15 percent

Amazon Web Services (AWS) raised the prices of its GPU instances for machine learning by around 15 percent this weekend, without warning, reports The Register. The price increase applies in particular to EC2 Capacity Blocks for ML, where, for example, the cost of the p5e.48xlarge instance rose from $ 34.61

Read More »

Iberdrola Ups Dividend after Reaching $146B Capitalization

Iberdrola SA on Thursday declared an interim dividend of EUR 0.253 ($0.29) per share for 2025 results, up from the minimum of EUR 0.25 it announced October. Earlier the Spanish power and gas utility said it reached EUR 125 billion ($145.55 billion) in stock market value at the start of 2026, having increased its capitalization by nearly 40 percent in 2025. “The company is once again offering its shareholders three options in this edition of Iberdrola Flexible Remuneration: to receive the interim dividend amount in cash; to sell their allocation rights on the market; or to obtain new bonus shares from the group free of charge”, it said in an online statement, adding the options can be combined. Shareholders who opt for cash are to receive the interim dividend February 2. “Shareholders who opt to receive new shares must have 73 free allocation rights to receive a new share in the company”, Iberdrola said. The dividend announced Thursday would be backed by a supplementary dividend Iberdrola plans to pay in July if approved at its general shareholders’ meeting, it said. “In order to implement this new edition of the remuneration system, a capital increase with a maximum reference market value of EUR 1.713 billion will be carried out”, it said. Iberdrola said Tuesday it is now the top utility in Europe by market capitalization and the second-biggest in the world. It noted the milestone was achieved in the year marking the 125th anniversary of its founding as Hidroeléctrica Ibérica. According to its latest quarterly report, Iberdrola produced 96,047 gigawatt hours (GWh) net in the first nine months of 2025, with renewable energy accounting for 66,254 GWh. Spain led geographically, accounting for 48,794 GWh of Iberdrola’s total net production in the period. It was followed by the United States (18,436 GWh). Mexico was

Read More »

Shell Expects Weak Oil Trading Result for Q4

Shell Plc said its oil trading performance worsened in the fourth quarter as crude prices slumped, adding to signs that Big Oil is heading into a tougher earnings season.  Oil trading results for the year’s final three months are “expected to be significantly lower” than the previous quarter, Shell said in an update on Thursday, ahead of an earnings report due early next month. At the company’s troubled chemicals division, a “significant loss” is expected. The update comes at a time when the oil market is lurching into an oversupply that could make for challenging trading conditions in the months ahead. The international benchmark Brent plunged 18 percent last year and has been largely unaffected by turmoil in Venezuela, where the country’s president Nicolás Maduro has been captured by US forces. It’s a “rough end to the year” for Shell, RBC Capital Markets analyst Biraj Borkhataria said Thursday in a note. He had expected “a relatively weak quarter, and this looks worse than expected.” Shell’s shares fell as much as 2.3 percent in early London trading. Shell’s massive in-house trading business deals in oil, gas, fuels, chemicals and renewable power – trading both the company’s own production as well as supply from third parties. The energy major doesn’t disclose separate results for its traders, but the performance is closely watched as it can be a key driver of earnings. A strong trading performance in the third quarter was one of the reasons Shell cited for earnings that beat estimates.  Since taking over the London-based energy giant three years ago, Chief Executive Officer Wael Sawan has sought to cut costs and offload under-performing assets to improve the company’s balance sheet. This is his first test in a lower oil-price environment that will pressure the firm’s ability to maintain its level of share buybacks. US rival Exxon Mobil

Read More »

National Grid Unveils New Plan for Anglo-Dutch Cable Link

National Grid PLC on Thursday announced route changes for the proposed LionLink cable to the Netherlands, a project with TenneT BV. The interconnector is designed to carry up to two gigawatts of wind electricity, enough for about 2.5 million British homes, according to National Grid. LionLink would connect a wind farm offshore the Netherlands to the Dutch and UK grids, with a targeted start of operation in 2032, National Grid says on the project webpage. The power transmission and distribution operator said in an online statement Thursday it would launch an eight-week public consultation for its new plan for the cable to start underground in Suffolk’s Walberswick, “a decision made following an assessment of the environment and local residents’ concerns around access constraints and traffic impacts”. “An alternative underground HVDC [high-voltage direct current] cable corridor to the north of Southwold was discounted following the consultations”, National Grid said. “NGV is also working closely with local authorities to ensure no construction takes place on the beach, and there is no visible infrastructure once the project is complete”, it added, referring to National Grid Ventures, its unit tasked with building and operating LionLink. “84 percent of the UK section of the LionLink cable will be offshore, and all onshore sections will be buried underground”. The new plan was based on non-statutory consultations in 2022 and 2023, LionLink project director Gareth Burden said. “We are coordinating with other developers in Suffolk on a regular basis so that where possible, we can work together to ensure construction is carried out in manageable sections, and we can avoid long-term disruption in any one area”, Burden added. National Grid noted, “LionLink is set to be one of the first projects of its kind, helping to shape the future of offshore renewable energy by combining wind generation and

Read More »

Oil Prices Jump as Short Covering Builds

Oil moved higher as traders digested a mix of geopolitical risks that could add a premium to prices while continuing to assess US measures to exert control over Venezuela’s oil. West Texas Intermediate rose 3.2% to settle below $58 a barrel. Prices continued to climb after settlement, rising more than 1% and leaving the market poised to wipe out losses from earlier in the week. President Donald Trump threatened to hit Iran “hard” if the country’s government killed protesters amid an ongoing period of unrest. A disruption to Iranian supply would prove an unexpected hurdle in a market that’s currently anticipating a glut of oil. Adding to the bullish momentum, an annual period of commodity index rebalancing is expected to see cash flow back into crude over the next few days. Call skews for Brent have also strengthened as traders pile into the options market to hedge. And entering the day, trend-following commodity trading advisers were 91% short in WTI, according to data from Kpler’s Bridgeton Research group. That positioning can leave traders rushing to cover shorts in the event of a price spike. The confluence of bullish events arrived as traders were weighing the US’s efforts to control the Venezuelan oil industry. Energy Secretary Chris Wright said the US plans to control sales of Venezuelan oil and would initially offer stored crude, while the Energy Department said barrels already were being marketed. State-owned Petroleos de Venezuela SA said it’s in negotiations with Washington over selling crude through a framework similar to an arrangement with Chevron Corp., the only supermajor operating in the country. Meanwhile, President Donald Trump told the New York Times that US oversight of the country could last years and that “the oil will take a while.” “We are really talking about a trade-flow shift as the

Read More »

Survey Shows OPEC Held Supply Flat Last Month

OPEC’s crude production held steady in December as a slump in Venezuela’s output to the lowest in two years was offset by increases in Iraq and some other members, a Bloomberg survey showed.  The Organization of the Petroleum Exporting Countries pumped an average of just over 29 million barrels a day, little changed from the previous month, according to the survey. Venezuelan output declined by about 14% to 830,000 barrels a day as the US blocked and seized tankers as part of a strategy to pressure the country’s leadership. Supplies increased from Iraq and a few other nations as they pressed on with the last in a series of collective increases before a planned pause in the first quarter of this year. The alliance, led by Saudi Arabia, aims to keep output steady through the end of March while global oil markets confront a surplus. World markets have been buffeted this week after President Donald Trump’s administration captured Venezuelan leader Nicolás Maduro, and said it would assume control of the OPEC member’s oil exports indefinitely.  While Trump has said that US oil companies will invest billions of dollars to rebuild Venezuela’s crumbling energy infrastructure, the nation’s situation in the short term remains precarious. Last month, Caracas was forced to shutter wells at the oil-rich Orinoco Belt amid the American blockade.  The shock move is the latest in an array of geopolitical challenges confronting the broader OPEC+ coalition, ranging from forecasts of a record supply glut to unrest in Iran and Russia’s ongoing war against Ukraine, which is taking a toll on the oil exports of fellow alliance member Kazakhstan. Oil prices are trading near the lowest in five years at just over $60 a barrel in London, squeezing the finances of OPEC+ members. Amid the uncertain backdrop, eight key nations agreed again this month to freeze output levels during the first quarter,

Read More »

Utilities under pressure: 6 power sector trends to watch in 2026

Listen to the article 10 min This audio is auto-generated. Please let us know if you have feedback. 2026 will be a year of reckoning for the electric power industry.  Major policy changes in the One Big Beautiful Bill Act, which axed most subsidies for clean energy and electric vehicles, are forcing utilities, manufacturers, developers and others to pivot fast. The impacts of those changes will become more pronounced over the coming months. Market forces will also have their say. Demand for power has never been greater. But some of the most aggressive predictions driving resource planning may not come to pass, leading some to fear the possibility of another tech bubble. At the same time, each passing day brings more distributed energy resources onto the grid, increasing the opportunities — and expectations — for utilities to harness those resources into a more dynamic, flexible and resilient system. Here are some of the top trends Utility Dive will be tracking over the coming year. Large loads — where are they, and who controls their interconnection — dominate industry concerns Across the United States, but particularly in markets like Texas and the Mid-Atlantic, large loads — mainly data centers designed to run artificial intelligence programs — are seeking to connect to the grid, driving up electricity demand forecasts and ballooning interconnection queues. That’s led some states to introduce new large load tariffs to weed out speculative requests, with more states expected to follow suit.  The Department of Energy is now pushing federal regulators to take a more active role in regulating how those loads get connected to the grid, setting the stage for a power struggle between state and federal authorities. The DOE asked the Federal Energy Regulatory Commission to issue rules by April 30, a deadline many say will be hard to meet. A

Read More »

JLL’s 2026 Global Data Center Outlook: Navigating the AI Supercycle, Power Scarcity and Structural Market Transformation

Sovereign AI and National Infrastructure Policy JLL frames artificial intelligence infrastructure as an emerging national strategic asset, with sovereign AI initiatives representing an estimated $8 billion in cumulative capital expenditure by 2030. While modest relative to hyperscale investment totals, this segment carries outsized strategic importance. Data localization mandates, evolving AI regulation, and national security considerations are increasingly driving governments to prioritize domestic compute capacity, often with pricing premiums reaching as high as 60%. Examples cited across Europe, the Middle East, North America, and Asia underscore a consistent pattern: digital sovereignty is no longer an abstract policy goal, but a concrete driver of data center siting, ownership structures, and financing models. In practice, sovereign AI initiatives are accelerating demand for locally controlled infrastructure, influencing where capital is deployed and how assets are underwritten. For developers and investors, this shift introduces a distinct set of considerations. Sovereign projects tend to favor jurisdictional alignment, long-term tenancy, and enhanced security requirements, while also benefiting from regulatory tailwinds and, in some cases, direct state involvement. As AI capabilities become more tightly linked to economic competitiveness and national resilience, policy-driven demand is likely to remain a durable (if specialized) component of global data center growth. Energy and Sustainability as the Central Constraint Energy availability emerges as the report’s dominant structural constraint. In many major markets, average grid interconnection timelines now extend beyond four years, effectively decoupling data center development schedules from traditional utility planning cycles. 
As a result, operators are increasingly pursuing alternative energy strategies to maintain project momentum, including: Behind-the-meter generation Expanded use of natural gas, particularly in the United States Private-wire renewable energy projects Battery energy storage systems (BESS) JLL points to declining battery costs, seen falling below $90 per kilowatt-hour in select deployments, as a meaningful enabler of grid flexibility, renewable firming, and

Read More »

SoftBank, DigitalBridge, and Stargate: The Next Phase of OpenAI’s Infrastructure Strategy

OpenAI framed Stargate as an AI infrastructure platform; a mechanism to secure long-duration, frontier-scale compute across both training and inference by coordinating capital, land, power, and supply chain with major partners. When OpenAI announced Stargate in January 2025, the headline commitment was explicit: an intention to invest up to $500 billion over four to five years to build new AI infrastructure in the U.S., with $100 billion targeted for near-term deployment. The strategic backdrop in 2025 was straightforward. OpenAI’s model roadmap—larger models, more agents, expanded multimodality, and rising enterprise workloads—was driving a compute curve increasingly difficult to satisfy through conventional cloud procurement alone. Stargate emerged as a form of “control plane” for: Capacity ownership and priority access, rather than simply renting GPUs. Power-first site selection, encompassing grid interconnects, generation, water access, and permitting. A broader partner ecosystem beyond Microsoft, while still maintaining a working relationship with Microsoft for cloud capacity where appropriate. 2025 Progress: From Launch to Portfolio Buildout January 2025: Stargate Launches as a National-Scale Initiative OpenAI publicly launched Project Stargate on Jan. 21, 2025, positioning it as a national-scale AI infrastructure initiative. At this early stage, the work was less about construction and more about establishing governance, aligning partners, and shaping a public narrative in which compute was framed as “industrial policy meets real estate meets energy,” rather than simply an exercise in buying more GPUs. July 2025: Oracle Partnership Anchors a 4.5-GW Capacity Step On July 22, 2025, OpenAI announced that Stargate had advanced through a partnership with Oracle to develop 4.5 gigawatts of additional U.S. data center capacity. The scale of the commitment marked a clear transition from conceptual ambition to site- and megawatt-level planning. 
A figure of this magnitude reshaped the narrative. At 4.5 GW, Stargate forced alignment across transformers, transmission upgrades, switchgear, long-lead cooling

Read More »

Lenovo unveils purpose-built AI inferencing servers

There is also the Lenovo ThinkSystem SR650i, which offers high-density GPU computing power for faster AI inference and is intended for easy installation in existing data centers to work with existing systems. Finally, there is the Lenovo ThinkEdge SE455i for smaller, edge locations such as retail outlets, telecom sites, and industrial facilities. Its compact design allows for low-latency AI inference close to where data is generated and is rugged enough to operate in temperatures ranging from -5°C to 55°C. All of the servers include Lenovo’s Neptune air- and liquid-cooling technology and are available through the TruScale pay-as-you-go pricing model. In addition to the new hardware, Lenovo introduced new AI Advisory Services with AI Factory Integration. This service gives access to professionals for identifying, deploying, and managing best-fit AI Inferencing servers. It also launched Premier Support Plus, a service that gives professional assistance in data center management, freeing up IT resources for more important projects.

Read More »

Samsung warns of memory shortages driving industry-wide price surge in 2026

SK Hynix reported during its October earnings call that its HBM, DRAM, and NAND capacity is “essentially sold out” for 2026, while Micron recently exited the consumer memory market entirely to focus on enterprise and AI customers. Enterprise hardware costs surge The supply constraints have translated directly into sharp price increases across enterprise hardware. Samsung raised prices for 32GB DDR5 modules to $239 from $149 in September, a 60% increase, while contract pricing for DDR5 has surged more than 100%, reaching $19.50 per unit compared to around $7 earlier in 2025. DRAM prices have already risen approximately 50% year to date and are expected to climb another 30% in Q4 2025, followed by an additional 20% in early 2026, according to Counterpoint Research. The firm projected that DDR5 64GB RDIMM modules, widely used in enterprise data centers, could cost twice as much by the end of 2026 as they did in early 2025. Gartner forecast DRAM prices to increase by 47% in 2026 due to significant undersupply in both traditional and legacy DRAM markets, Chauhan said. Procurement leverage shifts to hyperscalers The pricing pressures and supply constraints are reshaping the power dynamics in enterprise procurement. For enterprise procurement, supplier size no longer guarantees stability. “As supply becomes more contested in 2026, procurement leverage will hinge less on volume and more on strategic alignment,” Rawat said. Hyperscale cloud providers secure supply through long-term commitments, capacity reservations, and direct fab investments, obtaining lower costs and assured availability. Mid-market firms rely on shorter contracts and spot sourcing, competing for residual capacity after large buyers claim priority supply.

Read More »

Eight Trends That Will Shape the Data Center Industry in 2026

For much of the past decade, the data center industry has been able to speak in broad strokes. Growth was strong. Demand was durable. Power was assumed to arrive eventually. And “the data center” could still be discussed as a single, increasingly important, but largely invisible, piece of digital infrastructure. That era is ending. As the industry heads into 2026, the dominant forces shaping data center development are no longer additive. They are interlocking and increasingly unforgiving. AI drives density. Density drives cooling. Cooling and density drive power. Power drives site selection, timelines, capital structure, and public response. And once those forces converge, they pull the industry into places it has not always had to operate comfortably: utility planning rooms, regulatory hearings, capital committee debates, and community negotiations. The throughline of this year’s forecast is clarity: Clarity about workload classes. Clarity about physics. Clarity about risk. And clarity about where the industry’s assumptions may no longer hold. One of the most important shifts entering 2026 is that it may increasingly no longer be accurate, or useful, to talk about “data centers” as a single category. What public discourse often lumps together now conceals two very different realities: AI factories built around sustained, power-dense GPU utilization, and general-purpose data centers supporting a far more elastic mix of cloud, enterprise, storage, and interconnection workloads. That distinction is no longer academic. It is shaping how projects are financed, how power is delivered, how facilities are cooled, and how communities respond. It’s also worth qualifying a line we’ve used before, and still stand by in spirit: that every data center is becoming an AI data center. In 2026, we feel that statement is best understood more as a trajectory, and less a design brief. AI is now embedded across the data center stack: in

Read More »

Data Center Jobs: Engineering, Construction, Commissioning, Sales, Field Service and Facility Tech Jobs Available in Major Data Center Hotspots

Each month Data Center Frontier, in partnership with Pkaza, posts some of the hottest data center career opportunities in the market. Here’s a look at some of the latest data center jobs posted on the Data Center Frontier jobs board, powered by Pkaza Critical Facilities Recruiting. Looking for data center candidates? Check out Pkaza’s Active Candidate / Featured Candidate Hotlist.

Data Center Facility Technician (All Shifts Available), Impact, TX
This position is also available in: Ashburn, VA; Abilene, TX; Needham, MA; Lyndhurst, NJ; Philadelphia, PA; Atlantic City, NJ or New York, NY. Navy Nuke / Military Vets leaving service accepted! This opportunity is working with a leading mission-critical data center provider. This firm provides data center solutions custom-fit to the requirements of their clients’ mission-critical operational facilities, ensuring reliability for many of the world’s largest organizations: facilities supporting enterprise clients, colo providers and hyperscale companies. This career-growth minded role offers exciting projects with leading-edge technology and innovation as well as competitive salaries and benefits.

Electrical Commissioning Engineer, Ashburn, VA
This traveling position is also available in: New York, NY; White Plains, NY; Richmond, VA; Montvale, NJ; Charlotte, NC; Atlanta, GA; Hampton, GA; New Albany, OH; Cedar Rapids, IA; Phoenix, AZ; Salt Lake City, UT; Dallas, TX or Chicago, IL. *** ALSO looking for LEAD EE and ME CxA Agents and CxA PMs *** Our client is an engineering design and commissioning company with a national footprint that specializes in MEP critical facilities design. They provide design, commissioning, consulting and management expertise in the critical facilities space, with a mindset toward reliability, energy efficiency, sustainable design and LEED expertise when providing these consulting services for enterprise, colocation and hyperscale companies. This career-growth minded opportunity offers exciting projects with leading-edge technology and innovation as well as

Read More »

Microsoft will invest $80B in AI data centers in fiscal 2025

And Microsoft isn’t the only one ramping up its investments in AI-enabled data centers. Rival cloud service providers are all investing in either upgrading or opening new data centers to capture a larger chunk of business from developers and users of large language models (LLMs).

In a report published in October 2024, Bloomberg Intelligence estimated that demand for generative AI would push Microsoft, AWS, Google, Oracle, Meta, and Apple to devote a combined $200 billion to capex in 2025, up from $110 billion in 2023. Microsoft is one of the biggest spenders, followed closely by Google and AWS, Bloomberg Intelligence said. Its estimate of Microsoft’s capital spending on AI, at $62.4 billion for calendar 2025, is lower than Smith’s claim that the company will invest $80 billion in the fiscal year to June 30, 2025. Both figures, though, are far higher than Microsoft’s 2020 capital expenditure of “just” $17.6 billion. The majority of the increased spending is tied to cloud services and the expansion of AI infrastructure needed to provide compute capacity for OpenAI workloads.

Separately, last October Amazon CEO Andy Jassy said his company planned total capex spend of $75 billion in 2024 and even more in 2025, with much of it going to AWS, its cloud computing division.

Read More »

John Deere unveils more autonomous farm machines to address skilled labor shortage

Self-driving tractors might be the path to self-driving cars. John Deere has revealed a new line of autonomous machines and tech across agriculture, construction and commercial landscaping. The Moline, Illinois-based John Deere has been in business for 187 years, yet it’s been a regular as a non-tech company showing off technology at the big tech trade show in Las Vegas, and it is back at CES 2025 with more autonomous tractors and other vehicles. This is not something we usually cover, but John Deere has a lot of data that is interesting in the big picture of tech.

The message from the company is that there aren’t enough skilled farm laborers to do the work that its customers need. It’s been a challenge for most of the last two decades, said Jahmy Hindman, CTO at John Deere, in a briefing. Much of the tech will come this fall and after that. He noted that the average farmer in the U.S. is over 58 and works 12 to 18 hours a day to grow food for us. And he said the American Farm Bureau Federation estimates there are roughly 2.4 million farm jobs that need to be filled annually, and the agricultural workforce continues to shrink. (This is my hint to the anti-immigration crowd.)

John Deere’s autonomous 9RX Tractor. Farmers can oversee it using an app.

While each of these industries experiences its own set of challenges, a commonality across all is skilled labor availability. In construction, about 80% of contractors struggle to find skilled labor. And in commercial landscaping, 86% of landscaping business owners can’t find labor to fill open positions, he said. “They have to figure out how to do

Read More »

2025 playbook for enterprise AI success, from agents to evals

2025 is poised to be a pivotal year for enterprise AI. The past year has seen rapid innovation, and this year will see the same. This has made it more critical than ever to revisit your AI strategy to stay competitive and create value for your customers. From scaling AI agents to optimizing costs, here are the five critical areas enterprises should prioritize for their AI strategy this year.

1. Agents: the next generation of automation

AI agents are no longer theoretical. In 2025, they’re indispensable tools for enterprises looking to streamline operations and enhance customer interactions. Unlike traditional software, agents powered by large language models (LLMs) can make nuanced decisions, navigate complex multi-step tasks, and integrate seamlessly with tools and APIs. At the start of 2024, agents were not ready for prime time, making frustrating mistakes like hallucinating URLs. They started getting better as frontier large language models themselves improved.

“Let me put it this way,” said Sam Witteveen, cofounder of Red Dragon, a company that develops agents for other companies and recently reviewed the 48 agents it built last year. “Interestingly, the ones that we built at the start of the year, a lot of those worked way better at the end of the year just because the models got better.” Witteveen shared this in the video podcast we filmed to discuss these five big trends in detail.

Models are getting better and hallucinating less, and they’re also being trained to do agentic tasks. Another feature that the model providers are researching is a way to use the LLM as a judge, and as models get cheaper (something we’ll cover below), companies can use three or more models to

Read More »

OpenAI’s red teaming innovations define new essentials for security leaders in the AI era

OpenAI has taken a more aggressive approach to red teaming than its AI competitors, demonstrating its security teams’ advanced capabilities in two areas: multi-step reinforcement learning and external red teaming. OpenAI recently released two papers that set a new competitive standard for improving the quality, reliability and safety of AI models with these two techniques and more.

The first paper, “OpenAI’s Approach to External Red Teaming for AI Models and Systems,” reports that specialized teams outside the company have proven effective in uncovering vulnerabilities that might otherwise have made it into a released model, because in-house testing techniques may have missed them. In the second paper, “Diverse and Effective Red Teaming with Auto-Generated Rewards and Multi-Step Reinforcement Learning,” OpenAI introduces an automated framework that relies on iterative reinforcement learning to generate a broad spectrum of novel, wide-ranging attacks.

Going all-in on red teaming pays practical, competitive dividends

It’s encouraging to see competitive intensity in red teaming growing among AI companies. When Anthropic released its AI red team guidelines in June of last year, it joined AI providers including Google, Microsoft, Nvidia, OpenAI, and even the U.S. National Institute of Standards and Technology (NIST), all of which had released red teaming frameworks. Investing heavily in red teaming yields tangible benefits for security leaders in any organization.

OpenAI’s paper on external red teaming provides a detailed analysis of how the company strives to create specialized external teams that include cybersecurity and subject matter experts. The goal is to see if knowledgeable external teams can defeat models’ security perimeters and find gaps in their security, biases and controls that prompt-based testing couldn’t find.

What makes OpenAI’s recent papers noteworthy is how well they define using human-in-the-middle

Read More »