Building a Data Engineering Center of Excellence

As data continues to grow in importance and become more complex, the need for skilled data engineers has never been greater. But what is data engineering, and why is it so important? In this blog post, we will discuss the essential components of a functioning data engineering practice, explain why data engineering is becoming increasingly critical for businesses today, and show how you can build your very own Data Engineering Center of Excellence!

I’ve had the privilege of building, managing, leading, and fostering a sizeable, high-performing team of data warehouse and ELT engineers for many years. With the help of my team, I have spent considerable time every year consciously planning and preparing to manage the month-over-month growth of our data and to address the changing reporting and analytics needs of our 20,000+ global data consumers. We built many data warehouses to store and centralize the massive amounts of data generated by numerous OLTP sources. We implemented the Kimball methodology by creating star schemas both in our on-premises data warehouses and in those in the cloud.

The objective is to enable our user base to perform fast analytics and reporting on the data, so our analyst community and business users can make accurate, data-driven decisions.

It took me about three years to transform teams (plural) of data warehouse and ETL programmers into one cohesive Data Engineering team.

In this post, I have compiled some of my learnings from building a global data engineering team, in the hope that data professionals and leaders of all levels of technical proficiency can benefit.

Evolution of the Data Engineer

There has never been a better time to be a data engineer. Over the last decade, we have seen a massive awakening as enterprises recognize their data as the company’s heartbeat, making data engineering the job function that ensures accurate, current, and quality data flows to the solutions that depend on it.

Historically, the role of the data engineer has evolved from that of the data warehouse developer and the ETL/ELT (extract, transform, and load) developer.

Data warehouse developers are responsible for designing, building, developing, administering, and maintaining data warehouses to meet an enterprise’s reporting needs. This is done primarily by extracting data from operational and transactional systems and piping it, using an extract-transform-load methodology (ETL/ELT), to a storage layer such as a data warehouse or a data lake, where data analysts, data scientists, and business users consume it. The developers also perform transformations to conform the ingested data to a data model with aggregated data for easy analysis.
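
To make that flow concrete, here is a minimal ELT sketch in Python. SQLite stands in for both the OLTP source and the warehouse, and the orders table and its columns are hypothetical examples, not a prescription for any particular system.

```python
import sqlite3

# Stand-ins for an OLTP source and a warehouse; a real pipeline would
# connect to production systems and a cloud data warehouse instead.
source = sqlite3.connect(":memory:")
warehouse = sqlite3.connect(":memory:")

# Seed a tiny example source table so the sketch runs end to end.
source.execute(
    "CREATE TABLE orders (order_id INT, customer_id INT, order_date TEXT, amount REAL)"
)
source.executemany(
    "INSERT INTO orders VALUES (?, ?, ?, ?)",
    [(1, 101, "2024-01-01", 50.0), (2, 102, "2024-01-01", 75.0), (3, 101, "2024-01-02", 20.0)],
)

# Extract: pull raw rows from the operational system.
rows = source.execute(
    "SELECT order_id, customer_id, order_date, amount FROM orders"
).fetchall()

# Load: land the raw data in a staging table of the warehouse (ELT style).
warehouse.execute(
    "CREATE TABLE stg_orders (order_id INT, customer_id INT, order_date TEXT, amount REAL)"
)
warehouse.executemany("INSERT INTO stg_orders VALUES (?, ?, ?, ?)", rows)

# Transform: conform the staged data into an aggregate ready for analysis.
warehouse.execute(
    "CREATE TABLE daily_sales AS "
    "SELECT order_date, SUM(amount) AS total_amount FROM stg_orders GROUP BY order_date"
)
print(warehouse.execute("SELECT * FROM daily_sales").fetchall())
```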

A data engineer’s prime responsibility is to produce and make data securely available for multiple consumers.

Data engineers oversee the ingestion, transformation, modeling, delivery, and movement of data through every part of an organization. Data is extracted from many different sources and applications, and data engineers load it into data warehouses and data lakes, where it is transformed not just for data science and predictive analytics initiatives (as everyone likes to talk about) but primarily for data analysts. Data analysts and data scientists build operational reporting, exploratory analytics, and service-level agreement (SLA) based business intelligence reports and dashboards on the curated data. In this post, we will address all of these job functions.

The role of a data engineer is to acquire, store, and aggregate data from cloud and on-premises systems, both new and existing, supported by sound data modeling and a feasible data architecture. Without data engineers, analysts and data scientists won’t have valuable data to work with, which is why data engineers are the first to be hired at the inception of every new data team. Depending on the data and analytics tools available within an enterprise, data engineering teams’ role profiles, constructs, and approaches offer several options for what should be included in their responsibilities, which we will discuss in this post.

Data Engineering Team

Software is increasingly automating the historically manual and tedious tasks of data engineers. Data processing tools and technologies have evolved massively over the years and will continue to grow. For example, cloud-based data warehouses (Snowflake, for instance) have made data storage and processing affordable and fast. Data pipeline services (such as Informatica IICS, Apache Airflow, Matillion, and Fivetran) have turned data extraction into work that can be completed quickly and efficiently. The data engineering team should leverage such technologies as force multipliers, taking a consistent and cohesive approach to integrating and managing enterprise data, rather than relying on legacy, siloed approaches of building custom data pipelines with fragile, non-performant, hard-to-maintain code. Continuing with the latter approach will stifle the pace of innovation within the enterprise and force the future focus onto managing data infrastructure issues rather than helping generate value for your business.
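
As an illustration of what orchestration with such a service can look like, here is a minimal Apache Airflow sketch (one of the tools named above). The DAG name, schedule, and extract/transform callables are hypothetical placeholders; the point is that the orchestrator, not hand-rolled scripts, owns scheduling, ordering, and retries.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw data from a source system.
    print("extracting raw data")


def transform():
    # Placeholder: conform the raw data to the warehouse model.
    print("transforming staged data")


with DAG(
    dag_id="daily_sales_pipeline",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Declare the dependency; Airflow handles execution order and retries.
    extract_task >> transform_task
```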

The primary role of an enterprise Data Engineering team should be to transform raw data into a shape that’s ready for analysis — laying the foundation for real-world analytics and data science application.

The Data Engineering team should serve as the librarian for enterprise-level data, with the responsibility to curate the organization’s data and act as a resource for those who want to make use of it, such as Reporting & Analytics teams, Data Science teams, and other groups doing more self-service or business-group-driven analytics on the enterprise data platform. This team should serve as the steward of organizational knowledge, managing and refining the data catalog so that analysis can be done more effectively. Let’s look at the essential responsibilities of a well-functioning Data Engineering team.

Responsibilities of a Data Engineering Team

The Data Engineering team should provide a shared capability within the enterprise that cuts across and supports both the Reporting/Analytics and Data Science capabilities, providing access to clean, transformed, formatted, scalable, and secure data ready for analysis. The Data Engineering team’s core responsibilities should include:

· Build, manage, and optimize the core data platform infrastructure

· Build and maintain custom and off-the-shelf data integrations and ingestion pipelines from a variety of structured and unstructured sources

· Manage overall data pipeline orchestration

· Manage the transformation of data, either before or after the raw data is loaded, through both technical processes and business logic

· Support analytics teams with design and performance optimizations of data warehouses

Data is an Enterprise Asset.

Data as an Asset should be shared and protected.

Data should be valued as an enterprise asset and leveraged across all business units to enhance the company’s value to its customer base by accelerating decision making and improving competitive advantage. Good data stewardship, along with legal and regulatory requirements, dictates that we protect the data we own from unauthorized access and disclosure.

In other words, managing Security is a crucial responsibility.

Why Create a Centralized Data Engineering Team?

Treating Data Engineering as a standard, core capability that underpins both the Analytics and Data Science capabilities will help an enterprise evolve its approach to data and analytics. The enterprise needs to stop treating data vertically based on the technology stack involved, as we so often see, and move to a more horizontal approach of managing a data fabric or mesh layer that cuts across the organization and can connect to various technologies as needed to drive analytics initiatives. This is a new way of thinking and working, but it can drive efficiency as the various data organizations look to scale. Additionally, there is value in creating a dedicated structure and career path for data engineering resources. Data engineering skill sets are in high demand in the market; therefore, hiring from outside the company can be costly. Companies must give programmers, database administrators, and software developers a career path to gain the needed experience with the skill sets defined above by working across technologies. Usually, forming a data engineering center of excellence or capability center is the first step toward making such a progression possible.

Challenges of Creating a Centralized Data Engineering Team

Centralizing the Data Engineering team as a shared service is a different approach from how Reporting & Analytics and Data Science teams operate. It does, in principle, mean giving up some level of control over resources and establishing new processes for how these teams will collaborate and work together to deliver initiatives.

The Data Engineering team will need to demonstrate that it can effectively support the needs of both Reporting & Analytics and Data Science teams, no matter how large these teams are. Data Engineering teams must effectively prioritize workloads while ensuring they can bring the right skillsets and experience to assigned projects.

Data engineering is essential because it serves as the backbone of data-driven companies. It enables analysts to work with clean and well-organized data, which is necessary for deriving insights and making sound decisions. To build a functioning data engineering practice, you need the critical components described below.

The Data Engineering team should be a core capability within the enterprise, but it should effectively serve as a support function involved in almost everything data-related. It should interact with the Reporting and Analytics and Data Science teams in a collaborative support role to make the entire team successful.

The Data Engineering team doesn’t create direct business value; rather, its value comes from making the Reporting and Analytics and Data Science teams more productive and efficient, ensuring that Data & Analytics initiatives deliver maximum value to business stakeholders. To make that possible, the six key responsibilities within the data engineering capability center are as follows:

Data Engineering Center of Excellence — Image by Author.

Let’s review the six pillars of responsibilities:

1. Determine Central Data Location for Collation and Wrangling

Understanding and having a strategy for a data lake (a centralized data repository or data warehouse for the mass consumption of data for analysis); defining the requisite data tables and where they will be joined in the context of data engineering; and subsequently converting raw data into digestible and valuable formats.

2. Data Ingestion and Transformation

Moving data from one or more sources to a new destination (your data lake or cloud data warehouse), where it can be stored and further analyzed, and then converting the data from the format of the source system to that of the destination.

3. ETL/ELT Operations

Extracting, transforming, and loading data from one or more sources into a destination system to represent the data in a new context or style.

4. Data Modeling

Data modeling is an essential function of a data engineering team, though not all data engineers excel at it. It means formalizing the relationships between data objects and business rules into a conceptual representation: understanding information system workflows, modeling the required queries, designing tables, determining primary keys, and effectively utilizing the data to create informed output.

In technical discussions during interviews, I’ve seen engineers stumble on this more than on coding. It’s essential to understand the differences between dimension, fact, and aggregate tables.
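
As a minimal sketch of those distinctions, the star schema below separates descriptive dimensions from measurable facts and pre-summarizes facts into an aggregate. The table and column names are hypothetical, and SQLite stands in for a real warehouse.

```python
import sqlite3

con = sqlite3.connect(":memory:")

# Dimension table: descriptive context, one row per customer.
con.execute("""
CREATE TABLE dim_customer (
    customer_key INTEGER PRIMARY KEY,
    customer_name TEXT,
    region TEXT
)""")

# Dimension table: one row per calendar date.
con.execute("""
CREATE TABLE dim_date (
    date_key INTEGER PRIMARY KEY,
    full_date TEXT,
    month TEXT,
    year INTEGER
)""")

# Fact table: measurable events at the finest grain, keyed to the dimensions.
con.execute("""
CREATE TABLE fact_sales (
    sale_id INTEGER PRIMARY KEY,
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    date_key INTEGER REFERENCES dim_date(date_key),
    quantity INTEGER,
    amount REAL
)""")

# Aggregate table: fact data pre-summarized to a coarser grain for fast reporting.
con.execute("""
CREATE TABLE agg_sales_by_month AS
SELECT d.year, d.month, SUM(f.amount) AS total_amount
FROM fact_sales f
JOIN dim_date d ON f.date_key = d.date_key
GROUP BY d.year, d.month
""")
```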

5. Security and Access

Ensuring that sensitive data is protected and implementing proper authentication and authorization to reduce the risk of a data breach.
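
As a purely illustrative sketch of the authorization side (not any particular platform’s access model), a role-to-permission mapping checked before data is served might look like this; in practice it is enforced through warehouse grants, row- and column-level policies, and an identity provider.

```python
# Hypothetical role-based access check; real platforms enforce this through
# warehouse grants, data masking policies, and an identity provider.
ROLE_PERMISSIONS = {
    "data_engineer": {"read_raw", "read_curated", "write_curated"},
    "analyst": {"read_curated"},
    "data_scientist": {"read_curated", "read_features"},
}


def can_access(role: str, permission: str) -> bool:
    """Return True if the given role holds the requested permission."""
    return permission in ROLE_PERMISSIONS.get(role, set())


assert can_access("analyst", "read_curated")
assert not can_access("analyst", "read_raw")
```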

6. Architecture and Administration

Defining the models, policies, and standards that govern what data is collected, where and how it is stored, and how such data is integrated into various analytical systems.

The six pillars of responsibility for a data engineering capability center are the abilities to determine a central data location for collation and wrangling, ingest and transform data, execute ETL/ELT operations, model data, secure data access, and administer the architecture. While every company has its own specific needs with regard to these functions, it is important to ensure that your team has the necessary skill set to build a foundation for big data success.

Besides Data Engineering, the following are the other capability centers that need to be considered within an enterprise:

Analytics Capability Center

The analytics capability center enables consistent, effective, and efficient BI, analytics, and advanced analytics capabilities across the company. It assists business functions in triaging, prioritizing, and achieving their objectives and goals through reporting, analytics, and dashboard solutions, while providing operational reports and visualizations, self-service analytics, and the tools required to automate the generation of such insights.

Data Science Capability Center

The data science capability center explores cutting-edge technologies and concepts to unlock new insights and opportunities, better inform employees, and create a culture of prescriptive information usage using automated AI and ML solutions such as H2O.ai, Dataiku, Aible, DataRobot, and C3.ai.

Data Governance

The data governance office empowers users with trusted, understood, and timely data to drive effectiveness while keeping the integrity and sanctity of data in the right hands for mass consumption.


As your company grows, you will want to make sure that the data engineering capabilities are in place to support the six pillars of responsibilities. By doing this, you will be able to ensure that all aspects of data management and analysis are covered and that your data is safe and accessible by those who need it. Have you started thinking about how your company will grow? What steps have you taken to put a centralized data engineering team in place?

Thank you for reading!
