Introduction

Software project failures are a harsh reality in the world of technology. Despite the best intentions and efforts, projects can unravel for various reasons, such as poor estimation and planning, inadequate requirements gathering, scope creep, and unrealistic timelines. These failures not only result in financial losses but also tarnish a company’s reputation and erode stakeholder trust. Addressing project failures requires a proactive approach, emphasizing communication, risk management, continuous evaluation, and, above all, realistic estimation and planning. Embracing these lessons can lead to improved project outcomes and foster a culture of learning and growth in the software development industry.

In the dynamic world of software development, accurate cost estimation is crucial to ensure project success. Organizations rely on dependable software cost estimation practices to manage budgets, meet deadlines, and deliver quality products. To address this need, a new Software Cost Estimation Certification has emerged, complemented by the Cost Estimation Body of Knowledge for Software (CEBoK-S). In this blog, we will delve into the significance of this certification and the CEBoK-S, shedding light on how they empower professionals to excel in the field of software cost estimation.

The New Software Cost Estimation Certification (SCEC)

The new Software Cost Estimation Certification is a comprehensive program designed to equip professionals with the latest tools, methodologies, and best practices for accurately estimating software project costs. Offered by the International Cost Estimating and Analysis Association (ICEAA) through its special interest group, ICEAA-Software, this certification reflects the industry’s evolving demands and ensures that participants stay up to date with the latest trends.

Key Components:

  1. Advanced Estimation Techniques: The certification program covers a wide array of advanced estimation techniques, from traditional methods like function point analysis and COCOMO to modern approaches like agile estimation and parametric modelling. By learning these techniques, professionals gain the flexibility to adapt their approach to diverse project requirements.
  2. Risk Assessment and Mitigation: Effective cost estimation involves identifying potential risks and uncertainties that can impact the project’s outcome. The certification equips participants with the skills to assess and mitigate risks, allowing for better planning and resource allocation.
  3. Industry Case Studies: Real-world case studies are an integral part of the certification program. These case studies provide valuable insights into how cost estimation principles are applied in various scenarios, offering participants a practical understanding of the challenges they may encounter.
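To make the parametric techniques in the first point concrete, here is a minimal sketch of the basic COCOMO model. The coefficients are Boehm's original 1981 values for illustration only; they are not prescribed by the certification, and real curricula cover far richer models such as COCOMO II with cost drivers and scale factors.

```python
def cocomo_basic(kloc: float, mode: str = "organic") -> tuple[float, float]:
    """Basic COCOMO (Boehm, 1981): returns (effort in person-months,
    schedule in calendar months) for a project of `kloc` thousand lines of code.
    """
    # (a, b, c, d) coefficients per project mode, from the 1981 formulation.
    coeffs = {
        "organic":      (2.4, 1.05, 2.5, 0.38),
        "semidetached": (3.0, 1.12, 2.5, 0.35),
        "embedded":     (3.6, 1.20, 2.5, 0.32),
    }
    a, b, c, d = coeffs[mode]
    effort = a * kloc ** b        # person-months
    schedule = c * effort ** d    # calendar months
    return effort, schedule

# A hypothetical 50 KLOC organic-mode project:
effort, schedule = cocomo_basic(50)
```

Even this toy model shows the diseconomy of scale (the exponent above 1.0) that makes large projects disproportionately expensive, which is exactly the intuition the certification's advanced techniques build on.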

The CEBoK-S – Cost Estimation Body of Knowledge for Software

The CEBoK-S is a comprehensive guide that provides a structured framework for software cost estimation. Developed by industry experts, this body of knowledge encompasses a wide range of topics, from fundamental concepts to advanced practices, creating a solid foundation for professionals in the field.

Key Features:

  1. Detailed Framework: The CEBoK-S offers a detailed framework that covers all aspects of software cost estimation. It defines the key processes, activities, and inputs required for accurate estimation, guiding professionals through the entire estimation lifecycle.
  2. Best Practices and Standards: In an ever-changing industry, adhering to best practices and standards is crucial. The CEBoK-S outlines established industry standards, ensuring consistency and reliability in cost estimation practices across projects and organizations.
  3. Continuous Updates: Software development is continually evolving, and the CEBoK-S keeps pace with these changes. It undergoes regular updates to reflect the latest advancements and emerging trends in the field, making it a reliable and relevant resource for professionals.

Impact on the Software Industry

The combination of the new Software Cost Estimation Certification and the CEBoK-S has revolutionized the software industry’s approach to cost estimation. Certified professionals armed with the knowledge from the CEBoK-S are better equipped to address the challenges posed by modern software projects, leading to improved project outcomes and client satisfaction.

  1. Enhanced Project Planning: The comprehensive knowledge gained from the certification and the CEBoK-S enables professionals to create accurate and realistic project plans. This, in turn, leads to better resource allocation, reduced budget overruns, and timely project deliveries.
  2. Quality and Consistency: Employing standardized cost estimation practices ensures consistency in project management across different teams and organizations. This leads to higher-quality software development, as well as improved collaboration and communication among stakeholders.
  3. Improved Stakeholder Trust: Clients and stakeholders place their trust in organizations that employ certified professionals and follow industry standards. The certification acts as a testament to an organization’s commitment to excellence and professionalism.
  4. Higher Success Rates: Standardized estimation practices lead to more software development projects succeeding, with fewer cost and schedule overruns. This potentially saves companies large sums of money and spares them reputational damage.

Conclusion

In conclusion, the new Software Cost Estimation Certification and the CEBoK-S are instrumental in equipping professionals with the knowledge and skills required to excel in software cost estimation. By combining advanced estimation techniques with a structured body of knowledge, these resources elevate the industry’s cost estimation practices to new heights. As organizations continue to embrace these certifications, we can expect to see more successful projects, satisfied clients, and a stronger, more reliable software industry overall.

IDC Metri is proud to announce that its Software Cost Estimation Center of Excellence now has two Software Cost Estimation Certified professionals: Frank Vogelezang and Harold van Heeringen. More information can be found here: https://qa.idc.com/eu/idcmetri/it-intelligence

On May 24, AMD revealed its new Radeon RX 7600 graphics card. This is an entry-level card positioned to play the newest games at 60+ frames per second (fps) at 1080p. It supports very efficient streaming using the latest AV1 encoding technology. According to AMD, the card performs 1080p gaming 29% faster on average than the AMD Radeon RX 6600.

AMD’s latest RDNA 3 generation of cards has marked ray tracing improvements over the previous RDNA 2 versions. Our tests show that the Radeon RX 7600 can get close to the performance of the Radeon RX 6700 XT midrange card in ray tracing benchmarks such as Speedway and Port Royal. The RX 7600 achieved around 86% of the performance of the midrange card in both tests using default driver settings.
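Claims like "29% faster on average" or "around 86% of the performance" are typically derived from per-title ratios rather than a simple average of fps numbers; a common convention is the geometric mean of the ratios. A minimal sketch (AMD's exact methodology is not disclosed, and the fps figures below are hypothetical, not taken from this review):

```python
from math import prod

def average_uplift(fps_new: list[float], fps_old: list[float]) -> float:
    """Geometric mean of per-title fps ratios, expressed as a percentage uplift.
    The geometric mean avoids letting one high-fps title dominate the average."""
    ratios = [n / o for n, o in zip(fps_new, fps_old)]
    gmean = prod(ratios) ** (1 / len(ratios))
    return (gmean - 1) * 100

# Hypothetical per-title average fps for a new card vs. an older card:
uplift = average_uplift([120, 90, 60, 45], [95, 70, 48, 34])
```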

The Radeon RX 7600 is based on the AMD RDNA 3 architecture and includes revamped compute units with unified ray tracing and AI accelerators. It also features the second generation of AMD’s Infinity Cache technology.

The Test Platform

The test PC comprised an AMD Ryzen 5 7600X processor, the Radeon RX 7600 graphics card, a GIGABYTE X670E Aorus Master motherboard, and a G.SKILL Trident Z5 Neo 2x16GB DDR5-6000 EXPO memory kit, all of which were provided to IDC by AMD. The primary Windows 11 disk was a 1TB GIGABYTE Aorus NVMe Gen4 solid state drive.

A be quiet! Silent Loop 2 280mm water cooler was fitted for the processor, which was coupled with a be quiet! STRAIGHT POWER 11 Platinum 850W power supply. A 34” Dell Gaming S3422DWG monitor — a quad-HD 3440 x 1440 display with a 144Hz refresh rate, FreeSync, 10-bit colors, and high dynamic range functionality — was also used.

The reviewers utilized the motherboard’s optimal default settings, set the memory profile to EXPO 6000, and made sure that smart access memory was enabled. No special tuning, optimization, or overclocking was carried out for the tests.

Synthetic Benchmarks and Productivity Performance

Blender Benchmark 3.5.0 was used to evaluate the graphics card’s rendering performance. The Radeon RX 7600 ranked in the top 29% of all benchmarks and delivered a far quicker result than expected, thanks to the Heterogeneous-compute Interface for Portability (HIP) — AMD’s compute language for GPUs, which Blender Benchmark uses instead of OpenCL. This is good news for gamers who do light personal and family photo editing or enhance pictures for social media posts.

The system’s 3DMark Time Spy score of 10,557 was better than 60% of all results, which is respectable for an entry-level gaming machine.

Gaming Performance

Various old and new video games were tested on the platform, including next-gen updates of older titles.

Shadow of the Tomb Raider

This game averaged 134fps at 1080p with the maximum graphics settings and AMD’s FidelityFX Contrast Adaptive Sharpening enabled. With ray-traced shadows enabled at the high setting, the game ran at an average 77fps with a low of 53fps. Increasing the ray-traced shadow quality to extreme resulted in an average 70fps and a minimum of 43fps.

Far Cry 6

This game averaged 118fps at the 1080p high graphics quality setting, registering a minimum of 98fps. During testing, all DirectX Raytracing (DXR) and FidelityFX Super Resolution (FSR) capabilities were activated. Increasing the graphics settings to ultra quality resulted in an average 99fps and a minimum of 85fps.

Cyberpunk 2077

At 1080p, this game averaged 37fps with a minimum of 22fps. Ultra ray tracing presets and FSR 2.1 capabilities were activated automatically. The game performed at an average 50fps and a minimum of 35fps using the medium ray tracing setting, resulting in a much smoother experience.

The Witcher 3: Wild Hunt Next-Gen

This game averaged 38fps at 1080p, with a minimum of 26fps. Ultra ray tracing presets and FSR 2.1 capabilities were activated automatically. The game functioned significantly better at the medium ray tracing setting, clocking an average 57fps and a minimum 46fps. Without ray tracing, rasterization performance at extreme settings averaged 104fps with a minimum of 76fps.

Frequency, Power Consumption, Temperature, and Noise

The RX 7600 operated at an average frequency of 2545MHz, consumed 160W of power, and attained an average temperature of 79°C when playing The Witcher 3 in ultra ray tracing mode, with the GPU loaded to 99%. Spinning at low revolutions per minute, the two 90mm fans kept the card cool and near silent.

Final Words and Conclusion

According to IDC’s monitor tracker, about two-thirds of new monitors still have a maximum resolution of 1080p, and there is a massive installed base of such monitors. Not every customer with full HD aspirations is seeking the best and most costly gear. For example, Minecraft and Roblox are popular among youngsters, while Fortnite in performance mode is popular among teens. Such groups will be delighted with a PC powered by the RX 7600, and their parents will not have to seek a loan to build it!

AMD faces increased competition now that Intel has entered the arena alongside Nvidia. Difficult macroeconomic conditions — ranging from inflation to a war on the ground in Europe — are reducing consumer purchasing power. However, AMD has wisely evaluated the market conditions and taken quick and clever measures to adjust, such as reducing the proposed end-user price of the Radeon RX 7600 from an anticipated $299 to $269! AMD has also reduced the prices of its previous generation RDNA 2-based RX 6000 series cards, thereby providing gamers and customers with a wide selection of goods at various price points.

In conclusion, there is a lot to like about the AMD Radeon RX 7600. It is an affordable, sleek, and compact dual slot, dual fan graphics card that delivers impressive 1080p gaming performance at 50+fps on the highest graphical settings with FSR and ray tracing enabled.

Mohamed Hakam Hefny - Senior Program Manager - IDC

Mohamed Hefny leads market research in EMEA on professional workstation PCs and solutions. He also reports on professional computing semiconductors, processors, and accelerators (CPUs and GPUs), as well as breakthroughs and trends related to the market. In addition, Mohamed is actively involved in AI PC taxonomy and research. He participates in business development projects, contributes to consulting activities, and provides IDC customers with analysis, opinions, and advice.

Generative AI is a fascinating topic and has emerged as a powerful technology that pushes the boundaries of what computation can accomplish.

It has the potential to transform the realms of art and creativity, but also revolutionise industry processes.

There are a myriad use cases of generative AI across industries. We can see that different industries are adopting the technology to achieve specific business outcomes or address common challenges every organisation faces.

With its ability to generate content autonomously and simulate human-like outputs, generative AI has found applications in all industries, in fields as diverse as marketing, customer experience, and citizen engagement, as well as in industry-specific processes such as supply chain management automation in manufacturing.

Let us start by diving into the use cases that are common to several industries.

One of the first use cases to be adopted by organisations is conversational applications, ranging from virtual assistants and chatbots to language translation and personalised recommendations.

Another use case spanning industries is marketing applications, which can be widely adopted depending on the sensitivity of customer, citizen, or patient data and the industry’s appetite for online marketing. For example, social media automation, customer support via chatbots, and personalised marketing campaigns can enhance an organisation’s visibility while making its marketing investments more efficient.

A third use case cutting across industries is knowledge management applications. Organisations apply these to identify existing knowledge, summarise it, and support language translation and geographic contextualisation.
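At the core of such knowledge management applications is a retrieval step: finding the snippets of organisational knowledge relevant to a question before handing them to a language model. A toy sketch of that step in pure Python (production systems use embeddings and vector search; the knowledge base and scoring here are illustrative assumptions, and the model call itself is out of scope):

```python
def build_grounded_prompt(question: str, knowledge_base: dict[str, str],
                          top_k: int = 2) -> str:
    """Pick the knowledge snippets sharing the most words with the question
    and assemble them into a prompt for a language model."""
    q_words = set(question.lower().split())
    # Rank snippets by naive word overlap with the question.
    scored = sorted(
        knowledge_base.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    context = "\n".join(text for _, text in scored[:top_k])
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Hypothetical mini knowledge base:
kb = {
    "returns": "items can be returned within 30 days with a receipt",
    "shipping": "standard shipping takes five business days",
    "warranty": "the warranty covers manufacturing defects for two years",
}
prompt = build_grounded_prompt("how long does shipping take", kb)
```

Grounding the model in retrieved organisational knowledge, rather than letting it answer freely, is what makes these applications viable in regulated industries.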

Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures

However, industries adopt technologies based on their specific needs, goals, and customer demands. Unique processes, regulations, and market dynamics require tailored technologies, and it will be no different with generative AI.

Diverse industry requirements, resource constraints, competition, and technological maturity stages drive varying technology adoption across organisations. Now we’d like to explore how several industries are approaching generative AI and the technology adoption patterns of each industry:

Finance

In the ever-evolving landscape of the financial services industry, the emergence of generative AI technologies, led by OpenAI’s ChatGPT, has garnered significant attention from CIOs.

While some express concerns regarding privacy and ethics, and others grapple with understanding the full potential, there is a growing sense of urgency driven by the fear of missing out (FOMO). Contrary to sceptics’ concerns, the industry has demonstrated a shift in focus towards augmenting the capabilities of financial services professionals, rather than seeking to replace them.

By harnessing the power of large language models, financial institutions aim to centralise knowledge, empowering agents and professionals with essential information to enhance customer experiences and optimise operational efficiency.

An excellent example of this progressive trajectory is Sedgwick, a prominent global provider of third-party claims administration services. It has successfully integrated the OpenAI API version of ChatGPT, named “Sidekick,” into its sophisticated claims system, exemplifying Sedgwick’s commitment to elevating its claim-handling process and delivering unparalleled customer service experiences.

Another notable application gaining traction involves leveraging generative AI to enhance conversational interfaces. By revolutionising conversational capabilities, generative AI enables more human-like responses and facilitates complex interactions. Helvetia, a pioneering force in the insurance services realm, has embarked on a bold endeavour by launching a direct customer contact service utilising OpenAI’s ChatGPT.

This experimental initiative aims to provide seamless access to various financial products, showcasing the vast potential of generative AI in transforming customer interactions.

Energy (Utilities and Oil & Gas)

According to a recent IDC Survey ― Future Enterprise Resiliency & Spending Survey Wave 2, March 2023 (FERS) ― the utilities industry globally ranks second highest in terms of investments in generative AI technologies for 2023 (40% of respondents), surpassing the global cross-industry average of 24%.

This highlights the enormous potential for innovation, the amplification of human work, and reinvention of work processes in utility companies. The automation of certain tasks and AI-assisted transformation are expected outcomes.

While the utilities industry is still in the exploratory phase of identifying fruitful use cases, generative AI holds significant promise in areas such as content generation for sales and marketing, code-generation applications to improve productivity and the employee experience, conversational applications for customer service and CX improvements, and knowledge management, which is especially crucial given the challenge of an aging workforce in the utilities sector.

On the other hand, oil and gas organisations appear to be adopting a more conservative position.

The FERS survey reveals that only 18% of oil and gas companies worldwide are willing to invest in generative AI technologies in 2023.

However, 82% are actively conducting initial assessments to identify potential use cases. These assessments include evaluating the use of generative AI for multi-scenario authentic simulations and predictive capabilities in asset operations, generating subsurface images using fewer seismic data scans in the upstream part of the business, and generating human-like text to provide responses to domain-specific questions for business leaders.

Manufacturing

The early months of 2023 witnessed a surge of interest in generative AI and a renewed focus on AI in general.

While manufacturing organisations have not been early adopters of generative AI, they are gradually recognising the technology’s potential for leveraging vast research resources to create diverse content, including text, video, images, and virtual environments.

Among the respondents to the IDC 2023 Manufacturing Survey, 27% are already investing in generative AI technologies, and an additional 38% are engaged in basic exploration. Knowledge management and marketing applications are areas where organisations see short-term benefits, likely due to the availability of user-friendly, easily accessible technology such as ChatGPT.

Moreover, manufacturers believe that generative AI can have a significant medium-term impact on various aspects of their operations, such as production planning, quality control, AI-driven maintenance, code generation for programmable logic controllers, product development, design (including modelling, testing, and product life-cycle management), and sales (including client data analysis and content management).

However, there are ongoing challenges in maximising the value of AI/ML in manufacturing organisations. Many organisations still lack the necessary tools to address issues related to data availability and quality. IDC observes that internal capabilities and training in leveraging AI-powered technology and analytical tools are often lacking.

Read blog: Gen AI in an Industrial Environment — Recommendations for Early Adopters

Government

Generative AI tools such as ChatGPT, Bard, Dall-E 2, Vall-E, Stable Diffusion, and others have rapidly transitioned from arcane terms known only to AI experts to subjects of popular discussion in newspapers and TV talk shows within a matter of months.

OpenAI’s launch of ChatGPT in late 2022 sparked a wave of curiosity and speculation among the public, private companies, and public administrations. Initially, policymakers exercised caution, but senior civil servants quickly developed an interest in generative AI. Consequently, some jurisdictions have begun issuing guidelines.

The United Arab Emirates government, for example, has released guidelines encouraging the use of generative AI and providing ideas for potential use cases.

The Portuguese government has announced the “Practical Guide to Access to Justice,” which utilises the ChatGPT platform to help citizens obtain legal information in layman’s terms.

In another intriguing instance, a member of the Italian parliament used generative AI to write a speech, surprising fellow senators by disclosing its computer-generated nature at the end of the debate.

In the long term, generative AI has the potential to improve citizen experiences, amplify the competencies and capacity of civil servants, who often face overwhelming amounts of documents and cases, and aid administrations struggling to hire new talent.

At present, however, no major government entities in Europe, the Middle East, and Africa (EMEA) have implemented generative AI at scale. Nevertheless, numerous ideas, pilots, and prototypes are under development to understand the potential benefits in terms of citizen and employee experiences, increased operational efficiency, enhanced trust and compliance, environmental sustainability, and the governance and technical challenges that need to be addressed.

Healthcare

European healthcare organisations are increasingly recognising the benefits of generative AI in empowering and engaging patients and clinicians.

The most promising area of investment lies in knowledge management applications that enable a more efficient and effective flow of information among healthcare professionals, ultimately leading to better patient care.

For instance, generative AI can be employed to create or integrate more accurate patient histories and identify disease patterns, significantly enhancing the ability to make accurate diagnoses and develop effective treatment plans.

However, effective implementation of generative AI in healthcare faces limitations related to both data and models. Generative AI models require extensive training on large volumes of high-quality data.

Healthcare data quality varies widely, and its availability can be restricted due to privacy and ethical concerns. Additionally, generative AI models have limitations in terms of reproducibility due to their probabilistic nature and complex architecture. This undermines the reliability and trustworthiness of the models, especially when used to support clinical decision-making.

Read blog: Generative AI in Healthcare: Benefits and Risks

Retail

The retail industry is moving faster than humans can keep up with. Evolving customer expectations and needs, fierce competition, and the quest for enhanced process efficiency ― among others ― are all factors driving retailers to rush into experimenting with emerging technologies.

In fact, in 2022 newspapers were crowded with headlines about bold retailers and brands landing in the metaverse while, in 2023, the focus has already shifted to generative AI. However, while retailers’ metaverse initiatives have already cooled down in favour of new forms of (spatial) computing, generative AI technologies (such as ChatGPT and Dall-E) and solutions powered by LLMs or text-to-image models could have a major transformational business impact across the retail value chain.

IDC data shows that 40% of retailers are in the initial exploration phase of the technology, while 21% are actively investing in the implementation of generative AI tools for the year ahead. We can already see some relevant applications in the areas of product development, merchandising, supply chain, marketing, and customer experience.

Organisations such as Coca-Cola, Mattel, and Carrefour are piloting generative AI applications ― even though still on a limited scale and predominantly with a test-and-learn approach.

According to IDC findings, 50% of retailers expect to prioritise generative AI use cases for marketing in the next 18 months. In particular, generative AI could have a tremendous impact on the automation and personalisation of resource-intensive and time-consuming ecommerce processes such as product page descriptions, images and videos, and marketing copy.

For example, the Chinese ecommerce giant JD.com announced the imminent release of its own retail-specific ChatGPT-style solution, which aims to improve online retailers’ rankings of product listings on search engine results pages (SERPs), generate product descriptions tailored to a shopper’s preferences, and optimise online product image and video generation processes.

Overall, as shown by the IDC data cited above, the most promising and imminent area of investment for generative AI in the retail sector is marketing and, more specifically, digital marketing.

Even if, in the near future, the technology could raise important questions in terms of proprietary data sharing and customer data privacy, without a doubt the use of generative AI for text and image generation could greatly enhance and streamline the ecommerce shopping experience, leading to higher profitability of retailers’ online channels.

Architecture, Engineering, and Construction

The built environment sector has long been considered behind the curve when it comes to productivity and the adoption of digital technology. But emerging technologies, including generative AI, are accelerating innovation across the sector and aligning it with other industries.

According to an IDC Survey (Future Enterprise Resiliency & Spending Survey Wave 2, IDC, March 2023), 25% of resource and construction companies are investing in generative AI technologies this year, just above the industry average.

The potential of generative AI spans the building life cycle. When planning and designing a building, drawings and BIM models typically take weeks or months to produce. Generative AI has the potential to generate building designs in an afternoon based on pre-defined criteria such as building codes, site conditions, and sustainability standards.

The construction process is also ripe for innovation: studies find that the need to correct errors during projects accounts for between 5% and 12% of costs. Here, generative AI can create optimised construction schedules and augment supply chain and material planning.
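Any AI-optimised construction schedule still has to respect task dependencies, and the floor on project duration is set by the classic critical-path calculation. A minimal sketch (the task names and durations below are hypothetical):

```python
def critical_path_length(tasks: dict[str, tuple[int, list[str]]]) -> int:
    """Longest path through a dependency graph: the minimum possible project
    duration. `tasks` maps name -> (duration_in_days, [prerequisite names])."""
    finish: dict[str, int] = {}

    def earliest_finish(name: str) -> int:
        # Memoised recursion: a task finishes its duration after the
        # latest-finishing prerequisite.
        if name not in finish:
            duration, prereqs = tasks[name]
            finish[name] = duration + max(
                (earliest_finish(p) for p in prereqs), default=0
            )
        return finish[name]

    return max(earliest_finish(t) for t in tasks)

# Hypothetical mini-project plan:
plan = {
    "foundations": (10, []),
    "framing": (15, ["foundations"]),
    "roofing": (7, ["framing"]),
    "wiring": (9, ["framing"]),
    "inspection": (2, ["roofing", "wiring"]),
}
days = critical_path_length(plan)  # 10 + 15 + 9 + 2 = 36
```

Generative schedulers add value by reordering and resourcing the work around this constraint, not by escaping it.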

The opportunities extend to a building’s operation through to its demolition and recycling.

As with all industries, these opportunities must be balanced with potential risks. For AEC companies, there are specific physical safety risks associated with using generative AI for the automation of building designs and compliance checks. The correct safeguards and checks will need to be put in place as these technologies are piloted and rolled out.

Generative AI models also require extensive training on large high-quality data sets: the industry’s legacy of digital immaturity and data fragmentation will affect, but not stall, the rate of innovation.

Moving Forward

In conclusion, as the field of generative AI continues to evolve rapidly, it is paramount to cultivate strategies that enable us to navigate through the noise and discern between hype and reality.

Register for the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures

By gaining a clear understanding of the true potential and limitations of this technology, we can effectively harness its power. The wide-ranging applications of generative AI across various industries have the potential to reshape the way organisations manage their businesses and increase efficiency and productivity.

However, amid the excitement and buzz, it is vital to approach the subject with a discerning eye. Adopting an approach based on use cases, which reveals tangible evidence based on real-world results, becomes an imperative for tech vendors and end-user organisations alike.

Drawing upon practical applications and real-world experiences provides invaluable context, allowing us to differentiate between exaggerated claims and genuine achievements. By prioritising the examination of use cases and seeking concrete results, we deepen our understanding of the true potential and limitations of generative AI.

Another angle of this discerning strategy when it comes to generative AI is to rely on subject experts and look for insights connected to the industry in question, as experienced professionals in the field are the best source of reliable and up-to-date information. Moreover, this article was written by several humans, with human intelligence assisted by computers, not by generative AI.

Contributing analysts: Adriana Allocato, Davide Palanza, Gaia Gallotti, Jan Burian, Louisa Barker, Massimiliano Claps and Sofia Poggi

If you want to know more about generative AI visit our website, or for more in-depth industry insight click here.

Unless you’ve been living under a rock for the past six months, you’ll have heard of generative AI – technology that enables computers to create synthetic data or digital content based on previously created data or content. The launch of ChatGPT in late 2022 lit a fire under this emerging space and seemingly overnight, hundreds of millions of people became inspired by the results of work that had already been going on for years within academic and commercial technology vendor research departments.

Earlier in June we spent two days touring around investment banks and hedge funds in London to talk to investors about generative AI and answer their questions.

Download eBook: Generative AI in EMEA: Opportunities, Risks, and Futures

We had many great, in-depth discussions. Here are the questions that came up most frequently.

  1. Where is the Value in Generative AI in the Short, Medium, and Long Term?

Today, most of the value is being captured by hardware vendors — most notably NVIDIA, which has seen its share price take off following a sharp upswing in its predicted revenues. As the market-leading provider of GPUs, with a strong enabling software story and an emerging as-a-service play too, NVIDIA is very well positioned to capitalise on the generative AI boom.

Of course, NVIDIA isn’t the only vendor that potentially stands to benefit; AMD and other semiconductor vendors (including start-ups like Graphcore, Cerebras, and Moore Threads) are emerging as challengers, and generative AI platforms will drive storage and networking infrastructure investments too.

In the short to medium term, hyperscale public cloud providers can also expect to benefit significantly. With its early move investing in OpenAI and accelerated investments in generative AI across its software portfolio, Microsoft is in a particularly strong position; but AWS, Google, and Oracle are all also making significant moves in this space.

In the medium term, platform and application vendors also stand to benefit, although the value equation for them is less clear-cut. There are significant question marks over which generative AI use cases can support direct monetisation, and which will be important to implement from a defensive point of view. Many of the costs associated with managing generative AI models for scale, security, privacy, and trust will also fall on their shoulders.

  2. What Will Have to Be True for GenAI to Become a Broadly Adopted Technology?

Right now, we’re still in “year zero” for generative AI in a commercial context. There is still a lot of confusion around the technology and its applicability in practical, real-world use cases.

What is already clear, though, is that publicly shared foundation models delivered as a service (such as those hosted by OpenAI) will only be suitable for a subset of enterprise use cases. For many others, enterprises will use fine-tuned, domain-specific models made available to them on a private (or controlled) basis.

The current state-of-the-art in generative AI yields systems that are prone to accuracy problems, difficult to control and predict, and expensive to run. All of these issues need to be worked on.

  3. What Are the Implications for the Software Landscape?

Every software vendor that IDC is speaking to is updating or recreating its product roadmap to incorporate a generative AI strategy. Obviously, this will play out differently across infrastructure, platforms and applications; however, there are certain common questions being asked:

  • Should we develop our own large language models, or should we partner with model providers like OpenAI, Anthropic, Cohere and AI21 and tune their models for our software capabilities?
  • How should we price our new generative AI features?
  • Should access to customer data for model training be included in a new set of licensing terms and conditions? What do we offer in return (if anything)?
  • Do we need to evolve our support models to include service level agreements (SLAs) on accuracy for certain use cases?

Across all these questions, what is clear is that margin protection will be a major issue for software vendors over time – especially those with questionable pricing power. In addition, there will be increased requirements for additional levels of support to deal with model, context and data drift. For the application players, there is an increasing likelihood that forms-based computing as a basis for applications will disappear over time, and certain markets – for example, salesforce automation and human capital management – could be redrawn in the medium term.

As part of these changes, what is becoming clear is that the application vendors that are cloud laggards will be AI laggards, and that platforms will continue to dominate the software landscape.

More importantly, incorporating trusted and responsible AI principles into both product development and customer engagement will move from being a differentiator in the short term to table stakes in the medium term.

  4. What Are the Implications for Developers?

There’s been a significant amount of excitement about the ability of generative AI services (such as GitHub CoPilot, Replit Ghostwriter and Warp AI) to generate code, documentation, test scripts, and more.

Today’s state-of-the-art models are not going to put developers out of work. Rather, for some specific types of development work, and for some particular types of software asset being created, generative AI services are very likely to help developers accelerate their efforts to deliver working software, acting side-by-side with human developers in a “CoPilot” arrangement.

But it’s important to keep things in perspective: when we zoom out to consider the broader software delivery lifecycle, pro-innovation developers happy to experiment with new tools tend to bump into deployment, operations and support professionals who are much more risk averse.

  5. What Are the Implications for Services Providers?

Lastly, many of the investment teams we spoke to were very interested in discussing how professional services (particularly IT services) firms might be impacted by generative AI. Will it bring them major new opportunities? Or will its ability to drive automation of knowledge work mean that it forces providers to cannibalise their own businesses?

Our early research shows that more than 65% of early adopters of generative AI capabilities agree or strongly agree that their need for external services providers will be reduced in the future.

The potential impact of generative AI on project delivery is, in some ways, analogous to the potential impact of low- and no-code development tools; if providers can embrace these tools effectively and also deliver trusted solutions to clients, they may find fewer hours are required to deliver projects – but outcomes will be improved for everyone.

 

Register for the Webcast: Generative AI in EMEA: Opportunities, Risks, and Futures

 

The arrival of Generative AI technologies has created what we believe to be a seminal moment for the industry: it will be so impactful that it will influence everything that comes after it. However, we believe it is just the starting point. We think that Generative AI will trigger a transition to AI Everywhere – moving us from the use of narrow AI for specific use cases to widening AI for a range of use cases simultaneously.

This means that it will impact every element of the technology stack, and also drive a rethink of all horizontal and vertical use cases. However, given the questions around risk and governance, it will also require every organization to develop and incorporate an AI ethics & governance framework to deal with the risks mentioned earlier.

The investors that we spoke to in London agreed that the tech industry needs to take a balanced approach to commercializing the opportunity, while also ensuring that policies and regulations continue to protect consumers, enterprises and society as a whole.

Neil Ward-Dutton - VP AI, Automation, Data & Analytics Europe - IDC

Neil Ward-Dutton is vice president, AI, Automation, Data & Analytics at IDC Europe. In this role he guides IDC’s research agendas, and helps enterprise and technology vendor clients alike make sense of the opportunities and challenges across these very fast-moving and complicated technology markets. In a 28-year career as a technology industry analyst, Neil has researched a wide range of enterprise software technologies, authored hundreds of reports and regularly appeared on TV and in print media.

The first half of 2023 saw a surge of interest in generative AI (GenAI) that bordered on hysteria. For a few months, the world’s communications channels were abuzz with talk about its potential to impact almost every area of personal, social, and business life. Even industrial organizations started to examine if GenAI could add value to their operations.

GenAI opens access to a wealth of research that can be leveraged to generate a broad diversity of new content. Algorithms can be trained on existing large data sets and used to create content including text, video, images, even virtual environments.

We observe three ways that industrial users can get in touch with GenAI:

  1. Publicly Available Tools: ChatGPT-like tools provide users with information, generated content, or code. These publicly available tools and apps provide solid value to users. From a process point of view, the greatest benefits come from gaining market and supply chain intelligence, procurement intelligence, and training. However, these applications are not ideal for industrial use. Some organizations have even banned them to prevent sensitive data leakage.
  2. Embedded Enterprise Solutions: GenAI can be embedded in enterprise solutions like enterprise resource planning (ERP), product life-cycle management (PLM), and customer relationship management (CRM) systems. It can be present as a “copilot”: an AI assistant designed to support human users in generating or creating content using GenAI techniques. Most technology vendors are already implementing GenAI technology in their enterprise solutions, enabling organizations to benefit from it in areas like service management, supply chain planning, and product development.
  3. Use Cases and Apps: Developers can use GenAI to create or empower use cases and to develop apps. My IDC colleague John Snow believes GenAI can bring real value to a wide variety of business areas, assuming it has been trained on relevant data. This means we will see the creation of GenAI solutions specific to areas of expertise (e.g., product design, manufacturing, service/support), industries (e.g., automotive, medical devices, consumer products, chemical processing), and individual companies. Such focused tools will augment — and in some cases challenge — human-generated knowledge and experience as we know it.


Be Ready — But Careful

In operations-intensive environments like process manufacturing, AI may provide a handful of beneficial use cases. These could include production planning models and predictive maintenance through soft sensors and complex simulations.

Users have already learned to leverage the power of AI in daily operations in a safe way (i.e., in areas where the impact of a potential failure on the physical environment is minimal). Image recognition models, for example, can be trained on available data sets, enabling the model’s outputs to be verified against a standard.

AI is already part of countless aspects of manufacturing — but the reliability of AI-generated outputs remains unsettled. IATF 16949 is a great example. A global quality management standard developed for the automotive industry, it provides requirements for the design, development, production, and installation of automotive-related products. However, the standard does not explicitly cover AI or provide specific requirements for AI implementation.

AI can still be relevant in the automotive industry, however, and its applications may have implications for quality management. AI can be used in areas such as autonomous vehicles, predictive maintenance, quality control, and supply chain optimization.

Standards and regulations are continuously evolving — and new guidelines specific to AI or emerging technologies within the automotive industry may be developed in the future to address their unique considerations and challenges.

Output Challenges

Like any other methodology that serves industries, GenAI outputs must be 100% reliable. Most readers are probably familiar with the concepts of repeatability and reproducibility: repeatability measures how consistent results are when an experiment is repeated under identical conditions, while reproducibility measures consistency when the conditions change (different operators, instruments, or laboratories). Both are a means to evaluate the stability and reliability of an experiment and are key factors in the uncertainty calculations of measurements.
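To make the distinction concrete, here is a minimal sketch, using hypothetical gauge readings, of how repeatability and reproducibility can be quantified from measurement data: repeatability as the spread within one operator’s repeated measurements, reproducibility as the spread across different operators’ means.

```python
import statistics

def repeatability(measurements):
    # Spread of repeated measurements taken under identical conditions
    # (same operator, instrument and setup): the within-series std dev.
    return statistics.stdev(measurements)

def reproducibility(series_by_operator):
    # Spread of the per-operator means when the same quantity is measured
    # under changed conditions (different operators or instruments).
    means = [statistics.mean(s) for s in series_by_operator.values()]
    return statistics.stdev(means)

# Hypothetical gauge readings of the same part, in millimetres.
operator_a = [10.01, 10.02, 10.00, 10.01]
operator_b = [10.05, 10.06, 10.04, 10.05]

within = repeatability(operator_a)
between = reproducibility({"a": operator_a, "b": operator_b})
```

In this illustrative data set the between-operator spread exceeds the within-operator spread, which is exactly the kind of signal an uncertainty calculation would surface.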

GenAI-based tools might seem to be a black box for many potential industrial users. GenAI bias is a significant fear. This refers to the potential for biases to be present in the outputs or generated content produced by GenAI models. These biases can arise from various sources, including the training data used to train the models, the algorithms and techniques employed, and the inherent biases present in human-generated data used for training.

GenAI models learn patterns and structures from large data sets. If those data sets contain biases, the models can inadvertently learn and perpetuate those biases in their generated content. For example, if a GenAI model is trained on text data that contains biased language or stereotypes, it may generate text that reflects those biases.

GenAI bias can have several implications. It can perpetuate stereotypes, reinforce discriminatory practices, or generate content that is misleading or unfair. In some cases, GenAI bias can lead to the amplification of existing societal biases, as the generated content may reach a wide audience and influence perceptions and decision-making processes.

Addressing GenAI bias is a crucial aspect of using it properly — and mitigation of bias is a crucial stepping stone to increasing the technology’s reliability. Model creators and owners should ensure that the data used to train GenAI models is diverse, representative, and free from explicit biases.

If possible, mechanisms to detect and mitigate bias during the training and generation process should be implemented. Generated outputs should be continuously evaluated and monitored for biases. This includes the establishment of feedback loops with human reviewers or subject matter experts who can provide insights and flag potential biases.

We recommend striving for transparency and explainability. Make efforts to understand and interpret the internal workings of models to identify sources of bias and address them effectively. User feedback and iteration of GenAI models based on that feedback is encouraged.
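One simple detection mechanism of the kind mentioned above is a counterfactual test: present the model with inputs that differ only in a demographic term and compare a score assigned to the outputs. The sketch below is illustrative only; the word-list scorer and the echo “model” are stand-ins for a real model and metric.

```python
# Toy counterfactual bias check: swap only the demographic term in an
# otherwise identical prompt and compare a score assigned to the output.
POSITIVE = {"skilled", "reliable", "leader"}
NEGATIVE = {"unreliable", "risky"}

def sentiment(text):
    # Crude word-list scorer, standing in for a real quality metric.
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def counterfactual_gap(template, term_a, term_b, generate):
    # Difference in score when only the demographic term changes.
    out_a = generate(template.format(term=term_a))
    out_b = generate(template.format(term=term_b))
    return sentiment(out_a) - sentiment(out_b)

# Stand-in "model" that echoes its prompt: unbiased by construction.
gap = counterfactual_gap(
    "The {term} engineer is skilled and reliable", "male", "female", lambda p: p
)
```

A non-zero gap across many templates would be exactly the kind of flag to route to the human reviewers in the feedback loop.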

Users must also be wary of GenAI “hallucinations,” or situations where a GenAI model produces outputs that appear to be realistic but are not based on real or accurate information. In other words, the AI system generates content that is plausible but may not be grounded in reality. For example, a generative AI model trained on images of defects may generate new images of defects that resemble those in an existing defect category but do not actually exist.

Avoiding AI hallucinations entirely is challenging, but there are several actions that can be taken to limit occurrence or minimize impact. Let’s touch on a few: It is crucial to ensure that your AI model is trained on a diverse and representative data set that covers a wide range of examples from the real world. To improve the quality and reliability of the model’s outputs, the training data should be preprocessed and cleaned to remove inaccuracies, outliers, or misleading information. The model’s outputs should also be continuously evaluated and monitored to identify instances of hallucination or generation of unrealistic content.
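As a concrete illustration of the data-cleaning step, here is a minimal sketch of one common approach, an interquartile-range filter that drops numeric outliers from a training set. The sensor readings are hypothetical.

```python
import statistics

def remove_outliers(values, k=1.5):
    # Drop points outside [Q1 - k*IQR, Q3 + k*IQR], a simple way to
    # clean a numeric training set before fitting a model.
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if lo <= v <= hi]

readings = [4.9, 5.0, 5.1, 5.0, 4.8, 5.2, 42.0]  # 42.0 is a sensor glitch
clean = remove_outliers(readings)                # glitch removed
```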


Evolving Challenges

Because GenAI models generate new and original content without explicit programming, proving their reliability can be challenging. However, there are several approaches you can take to assess and provide evidence of the reliability of GenAI models.

Commonly used methods include defining and utilizing appropriate evaluation metrics to assess the quality and reliability of generated content. Human evaluation is also useful, including subjective assessments that rate the quality and reliability of generated content.

For some specific use cases (e.g., copilots), test set validation can be utilized. This includes creating a test set of specific scenarios or inputs representative of the desired output and evaluating the generated results against these inputs.

Adversarial testing can also be employed to deliberately introduce challenging or edge cases to the GenAI model to assess its robustness and reliability. As GenAI outputs evolve, it is recommended that long-term monitoring be used to continuously track and evaluate the performance and reliability of the model. This could be applicable, for example, in supply chain intelligence GenAI-powered applications.
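Test-set validation of the kind described above can be sketched very simply: score each generated answer against a reference with an automatic metric (token-overlap F1 here) and flag anything below a threshold for human review. The test pairs and the 0.8 threshold are illustrative assumptions.

```python
def token_f1(generated, reference):
    # Token-overlap F1: a crude but common automatic metric for
    # comparing a generated answer against a reference answer.
    gen, ref = generated.lower().split(), reference.lower().split()
    overlap = sum(min(gen.count(t), ref.count(t)) for t in set(gen))
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(gen), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)

# Hypothetical (generated, reference) pairs from a copilot test set.
test_set = [
    ("the pump requires maintenance", "the pump requires maintenance"),
    ("restart the line", "restart production line three"),
]
scores = [token_f1(g, r) for g, r in test_set]
flagged = [i for i, s in enumerate(scores) if s < 0.8]  # send to reviewers
```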

The Sky is the Limit — For Now

In the industrial environment, we are still scratching the surface of what GenAI can do. Organizations should collaborate with tech vendors and service providers to understand the value of GenAI and turn it into a significant competitive advantage. Regulators may try to restrict or otherwise control GenAI technology, but the cat is already out of the bag. Development is inevitable.

To get first-hand information about the development of GenAI, organizations should follow well-known AI technology specialists, as well as start-ups and hyperscalers. Hyperscalers like Google, Microsoft, and Amazon are at the forefront of AI research and development. They invest significant resources in exploring and advancing AI techniques, including GenAI. Hyperscalers often offer cloud-based AI services and platforms that include GenAI capabilities. Keeping up with their offerings can help you understand the latest tools and services available for developing GenAI applications.

Managers traditionally expect to start seeing ROI for tech like GenAI within 1.5 years — but with the right IT infrastructure in place to deliver scalability of GenAI tools, an ROI target could be reached within months. Improved customer service, for example, brings additional revenues almost immediately. And process optimization using data intelligence can provide improved productivity while reducing costs incurred due to poor quality.

Beware the Competition!

GenAI is poised to revolutionize the manufacturing industry, enabling manufacturers to unlock new levels of efficiency and innovation. From product design to supply chain optimization, GenAI can have a significant impact on KPIs.

But beware: do not allow the competition to outrun you in GenAI adoption. Stay on top of developments and act before competitors use GenAI to threaten your business.

At the same time, do not underestimate the risk of intellectual property (IP) leakage, or the unauthorized use, disclosure, or exposure of valuable intellectual property through the utilization of generative AI models. Embed an IP leakage prevention mechanism in your general AI and data governance. This should include removal or anonymization of sensitive or proprietary information from training data sets.

As always, stay busy with what works — but keep an eye focused on the future. Embracing this transformative technology is a crucial step toward more efficient and innovative prospects for businesses of any size.

You often see it on television: programs about people who are struggling financially. They run out of money at the end of the month, they can’t sell their house, they have a problematic debt burden, and so on. A common denominator is often the lack of insight into their own situation, and while coming up with ways to save money may not be very difficult, actually implementing and sticking to them is much harder.

I mean, it’s easy for an outsider to suggest that someone should get rid of their dog, but if that pet is their only source of comfort, it will take some effort.

The same goes for cloud costs: saving money is easier said than done. There are all sorts of great tools available from both cloud providers and third parties to help you understand your costs.

These tools provide various reports and dashboards, and even recommendations on which instances to remove or resize (rightsizing). With the right knowledge, you can also determine how to use discount options (reserved instances, savings plans, reserved capacity, etc.), how to manage licenses intelligently, and what you can do in your application architecture to save costs. And, of course, you can always turn off instances when you’re not using them.

All of this insight is great, but then comes the second part. Just as people have a hard time saying goodbye to their pets, users and administrators have a hard time shedding their old habits and ways of thinking. And that’s something cloud providers never talk about.

For example, consider turning off instances outside of working hours. In theory, this is an excellent way to save money, but instances are part of applications, which in turn are part of chains. It can happen that data exchange takes place in a chain outside of working hours.

Testing teams that are under a deadline may also need their environment outside of the predetermined working hours. And if environments are used in the management chain, they must also be available after working hours in case of an emergency. So savings are theoretically simple, but practice is more complicated. It can be done, but it takes a lot of effort.
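The exceptions described above can be encoded directly into a shutdown policy. The sketch below is a simplified illustration; the tag names and working hours are assumptions, not any cloud provider’s API.

```python
from datetime import time

WORK_START, WORK_END = time(7, 0), time(19, 0)  # assumed office hours

def should_stop(now, tags):
    # Never stop instances that are part of an always-on chain, or test
    # environments granted a temporary deadline extension; otherwise,
    # stop anything running outside working hours.
    if tags.get("always-on") == "true":
        return False
    if tags.get("deadline-extension") == "true":
        return False
    return not (WORK_START <= now <= WORK_END)

stop_plain = should_stop(time(22, 30), {})                     # stop it
stop_chain = should_stop(time(22, 30), {"always-on": "true"})  # keep it
```

In practice the decision would also have to consult the application chain, not just per-instance tags, which is exactly why these savings take effort.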

Rightsizing is also less straightforward than it seems. Users and administrators are often hesitant to remove capacity: users see their performance decrease, and administrators see the risk of more outages because there is less excess capacity to handle issues. In the latter case, you need to analyze where these issues are coming from: a poor application can benefit from more capacity, but that is not a long-term solution.

If the roof is leaking, you can replace the bucket you use to catch the water with a mortar tub, but even that will eventually fill up. Ultimately, you’ll have to repair the roof.

So, objections can be raised for all types of savings. Eventually, you’ll need to adopt an approach that not only makes costs visible but also involves users and administrators, and leads to the right considerations on where to save on your cloud costs and where not to.

Don’t know where to start? Can’t figure it out quickly enough? IDC Metri has helped several organizations get started. Our specialists can help kickstart your cost-saving efforts in the cloud. Because understanding costs is one thing, but it’s only useful if they actually decrease.

 

Want to learn more? Subscribe to IDC Metri’s monthly newsletter full of actionable insights on IT benchmarking, intelligence, sourcing and more.

I was born in Ravenna, on the east coast of Emilia-Romagna, one of the most liveable and prosperous regions in Italy. Emilia-Romagna is home to 7.3% of the Italian population. It accounts for 9.2% of GDP and 11.8% of agricultural production.

It is home to the headquarters of globally successful firms in automotive, motorbikes, food production, ceramic tiles, textile and fashion, biomedical engineering, construction, woodworking equipment and much more. Unemployment is at 5.1%, well below the 2022 national average of 8.2%. Life expectancy is higher than the national average.

There are white sandy beaches, natural reserves in coastal wetlands, and beautiful hills and mountains, which combined with a rich heritage — Ravenna alone boasts eight UNESCO heritage sites — and amazing food and wine attract tens of millions of tourists every year.

Besides these material treasures, there is a unique way of living in Emilia-Romagna. And even more so in Romagna, where I grew up; there’s an old saying that you can tell if you are in the Romagna part of the region because when a stranger shows up at someone’s door, they are welcomed with a smile and a glass of wine. On the Emilia side, they’ll be equally warmly welcomed, but with a glass of water!

There is a sense of shared joy, a passion for life and a pride in belonging to one’s community – a shared sense of resilience that drives people through the hardships of life with a smile on their face, always trying to put a smile on someone else’s. Because there is always a little bit of magic, even in the small things.

As Federico Fellini, the world-famous movie director and one of the most beloved children of our region, once said: “Life is a combination of magic and pasta.”

It feels good to be a Romagnolo. And to visit Romagna … unless you happened to be there in the first two weeks of May 2023.

Smart River and Water Management: Preparing for Foreseeable Disasters

After many months of drought, in the first 17 days of May 2023, Romagna was hit by as much rain as it usually gets in six months. In some areas this meant up to 400mm of rain in two weeks. To put things in perspective, one of the worst hit municipalities, Faenza, which is home to 60,000 people, experiences on average 760mm of rain a year.

Stereotypically rainy London gets 690mm a year. The result of this unusually heavy rain was that 23 rivers burst their banks, resulting in 50 floods; 305 landslides devastated hills and mountains; 14 people died; and over 36,000 people were displaced from their homes. The estimated economic damage to homes, factories, farms and public infrastructure is north of €5 billion, with around €600 million needed just to rebuild public infrastructure.

Climate change is increasing the frequency and intensity of these extreme weather events. Long-term environmental sustainability actions, which are progressing way too slowly, will not be enough.

Resilience to short-term shocks is imperative. Money is not the problem; in fact, there is an estimated €8 billion available from the Italian COVID Recovery and Resilience Plan and the “Italia Sicura” (Safe Italy) plan to make public infrastructure more resilient. This, however, is at risk of not being spent, or not spent well, because of lack of planning, skill gaps, slow public procurement, and insufficient competencies and capacity to audit.

Technology innovation is not a silver bullet, but when implemented wisely it can help fill some of those gaps. The increasing availability and granularity of data from satellite images, IoT sensors, weather monitoring and forecasting models already tell us that Italy has the highest amount of rain in Europe, with 300 billion cubic meters a year.

Building permitting systems, public works inspection systems and other sources tell us that Emilia-Romagna was the fourth worst region in terms of soil consumption in Italy in 2021, including in areas at high risk of flooding. By building on the existing knowledge, collecting more data and turning the data into intelligent smart river and water management insights, governments, water utilities and the public could make better decisions across the disaster resilience life cycle, from mitigation to preparedness, from response to recovery.

  • Mitigation: Governments can use a wide variety of tools to develop hazard maps that can identify areas most at risk and feed into planning and preparedness systems. Policymakers and building inspectors can feed intelligent insights into planning and operational simulation tools, such as digital twins, to simulate the impact of building code and permitting decisions to reduce soil consumption and require the use of more resilient building techniques and materials.
  • Preparedness: The benefits of building flood-resilient systems (dams, levees, flood walls, diversion canals, etc.), protecting natural systems such as wetlands, marshes and beaches, and using resilient building techniques such as tiled pavements instead of concrete for parking lots and roads to increase water absorption can all be augmented by making these assets and tools intelligent. The intelligence from those systems can enable real-time or preventive decisions about diversion tactics, rather than reacting only when the flood is too close.
  • Response: Real-time data from weather forecasting models, integrated with data from dam and river sensors, should be analysed to detect anomalies and automatically raise emergency alerts that promptly notify citizens. That beats relying on fire and police patrols roaming the roads of small rural villages and towns with loudspeakers to tell citizens to evacuate, or expecting mayors to post videos on social media and hoping everybody pays attention, as happened in Romagna in the past two weeks. More intelligent use of data can also provide insights for command-and-control personnel to coordinate first responders and orchestrate the supply of food, clothes and medicine for shelters, instead of relying on emails, spreadsheets and phone calls.
  • Recovery: Digital twins would allow evidence-based infrastructure planning decisions and the monitoring of investments aimed at rebuilding infrastructure, increasing the speed and transparency of projects to avoid wasting time and money. AR/VR tools can help engineers conduct inspections when anomalies are detected.
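The anomaly detection step in the response scenario above can be as simple as a rolling z-score over recent gauge readings. This sketch uses hypothetical water-level data and a conventional threshold of three standard deviations.

```python
import statistics

def level_alert(history, latest, z_threshold=3.0):
    # Flag a gauge reading that deviates sharply from recent history:
    # a rolling z-score is one of the simplest detectors that could
    # feed an automated citizen-alerting pipeline.
    mean, sd = statistics.mean(history), statistics.stdev(history)
    if sd == 0:
        return latest != mean
    return abs(latest - mean) / sd > z_threshold

# Hypothetical water-level readings in metres from a river gauge.
recent = [1.10, 1.12, 1.09, 1.11, 1.10, 1.13, 1.11, 1.12]
alarm = level_alert(recent, 1.85)  # sudden surge
calm = level_alert(recent, 1.12)   # normal reading
```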

The same technology infrastructure — with a few additions in terms of sensors and applications — will provide intelligent insights for other use cases, such as water conservation in dry seasons, leakage reduction, biodiversity protection in rivers, marshes and ports, sustainable water transportation, and water quality.

Only two days after the peak of the emergency, millions of euros, as well as food, clothing and other supplies, had been donated to flooded areas in Emilia-Romagna from all over Italy and beyond. Boosted by the typical Romagnolo spirit, spontaneous neighbourhood efforts have mushroomed to clean mud from houses, roads and farms. Beaches have already been cleaned for the upcoming tourist season. But that resolve to recover quickly should not allow us to forget what happened. We know what the future holds. Extreme weather events will happen, not only in well-known high-risk flooding areas, such as the Indian Subcontinent, Southeast Asia, and Pacific and Caribbean Islands, but also in traditionally safer regions of the world.

Technology innovation will be critical to climate change resilience. But technology alone will not be enough. It’s not enough to feel compassion to help when disaster happens. We need to invest in mitigation and preparedness measures that generate the highest long-term returns.

Massimiliano Claps - Research Director - IDC

Massimiliano (Max) Claps is the research director for the Worldwide National Government Platforms and Technologies research in IDC's Government Insights practice. In this role, Max provides research and advisory services to technology suppliers and national civilian government senior leaders in the US and globally. Specific areas of research include improving government digital experiences, data and data sharing, AI and automation, cloud-enabled system modernization, the future of government work, and data protection and digital sovereignty to drive social, economic, and environmental outcomes for agencies and the public.

AI Act: How Did We Get Here and Where Are We Now?

In April 2021, the European Commission submitted a detailed proposal of its plan to regulate artificial intelligence development and use in Europe: the AI Act. The AI Act’s goal is to ensure that the development and deployment of AI systems in Europe is safe, transparent and compliant with the EU’s fundamental rights and values ― protecting the public, while still fostering innovation.

The Council adopted a “general approach” on a set of harmonized rules on artificial intelligence in late 2022, but rapid progress of the technology, together with the sudden wave of innovation in generative AI systems, delayed the final discussion of the legislation as new amendments to cover the latest developments were explored. On May 11, the European Parliament committees approved the AI Act by a large majority, in a vote that paved the way for the plenary vote in mid-June (June 14 as a tentative date).

Let’s now look at the main principles of the proposed regulation and how it will impact the AI market in the region.

Regulating the Development and Deployment of AI in the EU ― Key Aspects of the AI Act

The proposal identifies three (+1) risk categories for AI applications and applies different restrictions and obligations on system providers and users, depending on the category of the application in question:

  • Unacceptable risk: applications that involve subliminal manipulation, exploitative practices, or social scoring by public authorities. Such applications will be banned.
  • High risk: applications related to education, healthcare and employment, such as CV scanning and the ranking of job applicants, will be subject to specific legal requirements (e.g., ensuring transparency and safety of the systems, and complying with the Commission’s mandatory conformity requirements). Providers of “high-risk” systems will be obliged to establish quality management systems, keep up-to-date technical documentation, undergo conformity assessments (and re-assessments) of the systems, conduct post-market monitoring, and collaborate with market surveillance authorities.
  • Limited risk: this mostly includes AI systems such as chatbots that will be subject to specific transparency obligations (e.g., disclosing that interactions are performed by a machine, so that users can make informed decisions).
  • Minimal risk: applications that are neither listed as risky nor explicitly banned are left largely unregulated (e.g., AI-enabled video games). Currently, this category covers the majority of AI systems used in the EU.
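To show how the four tiers could drive an initial triage of a use-case portfolio, here is a toy classifier mapping use-case descriptions to the risk categories above. The keyword rules are illustrative assumptions only; real classification depends on the final legal text, not keyword matching.

```python
# Toy triage of AI use cases into the AI Act's risk tiers.
# Keyword rules are illustrative assumptions, not legal criteria.
BANNED = {"social scoring", "subliminal"}
HIGH = {"cv screening", "medical diagnosis", "exam grading"}
LIMITED = {"chatbot"}

def risk_tier(use_case):
    uc = use_case.lower()
    if any(k in uc for k in BANNED):
        return "unacceptable"
    if any(k in uc for k in HIGH):
        return "high"
    if any(k in uc for k in LIMITED):
        return "limited"
    return "minimal"

tiers = [
    risk_tier("Social scoring of citizens by a public authority"),
    risk_tier("CV screening of job applicants"),
    risk_tier("Customer-service chatbot"),
    risk_tier("AI opponent in a video game"),
]
```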

How Will the AI Act Affect the European AI Landscape?

The introduction of the European AI Act has sparked discussions on its potential impact on the adoption of AI technologies. Will this regulation hinder AI innovation in Europe? The answer is not straightforward, as it depends on various factors and the evolving landscape.

AI regulation may impose compliance costs, administrative burdens, and legal uncertainty on businesses and developers. Extensive testing, validation, and monitoring of AI systems may become necessary, which can be time-consuming and expensive. There might also be limitations on the types of applications, industries, data, or algorithms used in AI systems.

However, when assessing the direct impact on AI use cases falling under the regulated risk categories, the outcome is not overwhelmingly negative. When we at IDC built a data model to assess which, and how many, AI use cases would be directly affected (those falling into the risk categories listed above), the impact was only modest, and the potential lost revenue was not at a worrying level.

The compliance costs and administrative burdens could be challenging for SMEs and startups, though, which may inhibit competition in Europe if larger, more established providers find it easier to comply.

Industries like healthcare, public administration, or finance are likely to face more stringent requirements due to their potential impact on human life and safety. Transparency, explainability, human oversight, and restrictions on the use of, for example, biometric identification technologies are some of the obligations that might be imposed. While these requirements may limit certain applications, they also aim to protect privacy and individual rights. It is worth noting, however, that the regulation includes a list of exemptions: providers of systems used for national security purposes, for example, are largely out of scope.

On the positive side, regulation has the potential to enhance wider trust and confidence in AI systems. This is crucial in countering overhyped, pop-culture-fed media narratives of AI as a threat. A trusted regulatory framework reduces legal uncertainty and creates a level playing field for businesses, public institutions, consumers, and citizens. Wisely designed laws will improve the quality and safety of AI systems and, first and foremost, safeguard individuals.

The AI Act aims to encourage AI technologies that align with ethical and societal values that the EU strongly supports, such as transparency, accountability, and human-centricity. It wants to stimulate research and development in these areas and promote collaboration and openness among organizations and regions. By establishing common standards and best practices, the EU facilitates knowledge exchange and expertise sharing.

Conclusion

Looking at AI regulation through the lens of healthcare offers valuable insights. Healthcare regulations ensure safety, efficacy, and patient rights. They impose requirements on manufacturers to meet necessary standards. Similarly, AI regulations can ensure ethical and safe technology use while balancing innovation and protection.

While the potential impact of the European AI Act on AI adoption and innovation may present challenges, it also offers opportunities. By adhering to the regulatory framework, AI providers can navigate the landscape effectively, gain public trust, and promote responsible AI practices.

As the AI Act progresses, it is crucial to stay updated with the latest developments. At IDC, we will closely follow the progress of the AI Act and will continue publishing comprehensive research, providing deeper insights into its implications and potential impact as we approach the EU vote in June.

 

If you want to know more about this, please contact the team: Lapo Fioretti, Andrea Siviero, Neil Ward-Dutton or Ewa Zborowska

Lapo Fioretti - Senior Research Analyst - IDC

Lapo Fioretti is a Senior Research Analyst in IDC’s Digital Business Research Group, leading the European Emerging Technologies Strategies research. In his role, he advises ICT players on how European organizations leverage new technologies to create business value and achieve growth, and analyzes the development and impact of emerging trends on the markets. Fioretti also co-leads the IDC Worldwide MacroTech Research program, focused on the intertwined connection between the economic and digital worlds, analyzing the impact key macroeconomic factors have on the digital landscape and, vice versa, how technologies are impacting economies around the world.

At IDC’s UK & Ireland Security Summit 2023, on April 17, 2023, 60 security leaders from across the UK and Ireland discussed the key theme of the event — “Security Strategy 2023: Managing Risk to Enable Digital Business”.

The summit featured an impressive panel of speakers from our partners and the CISO community, complemented by insights from the IDC’s European Security and Privacy team. Based on the presentations, workshops, and roundtable discussions from over 20 sessions, our top five European cyber security trends are as follows:

  1. Threat Landscape

Security practitioners are aware that their attack surfaces are expanding due to digital transformation, remote work, IoT and mobile adoption, and an increasing reliance on the web for conducting all aspects of business. The cyber threats facing organizations are diverse and fast-changing, and the ability to understand and mitigate risk depends on a clear view of the complexity and dynamic nature of the threat landscape. Who might the threat actors be? How are they trading in enterprises’ stolen credentials and vulnerabilities? Employees and contractors continue to be a point of entry for successful cybercrime, whether through credential theft or simply end users clicking on malicious links. Standards for security hygiene must be continually assessed and addressed: for example, avoiding guessable password formats, conducting regular backups on different media (including immutable data backups), and limiting the use of unsanctioned IT or Bring Your Own Device (BYOD).

Businesses should challenge the security industry on how technology vendors and MSSPs can drive security behind the scenes, so that malicious URLs and emails never appear in the inbox or browser in the first place. Security should thus become more invisible and frictionless.

  2. The Evolving Security Leadership Role

IDC sees the CISO role as a communications conduit to the board and the C-suite on strategic security topics. It has become important for security leaders to have skills that extend beyond the technicalities of security. The modern CISO needs the capability to understand the overall business strategy and direction, which will inevitably include digital transformation or digital business elements. The CISO must ensure that the security outcomes delivered are consistent with business strategy and digital initiatives.

  3. The Importance of Cyber Crisis Readiness

A senior speaker from a European government national defence agency highlighted how a demonstration of crisis response during a major global sporting occasion was a valuable exercise, as it gave leaders first-hand experience of how the response to a crisis is handled in a realistic scenario. In this example, the crisis response group brought in senior government officials to witness crisis response activities. Major cyber-attacks on critical national infrastructure have become national security events, and predetermined crisis centres are essential for the most effective response to serious incidents. The key takeaway is that security leaders should explore bringing the C-suite and board into cyber crisis simulation “rooms” to imitate a major attack, using this to critically evaluate responses amongst the executive leadership and to build muscle memory so that appropriate responses become more automatic.

  4. Generative AI

It is widely agreed that generative AI will have a transformative effect across all aspects of the technology industry, including cyber security. Generative AI is already a major issue for cybersecurity, making phishing attacks, for example, much harder to detect. Businesses and governments should move quickly to understand and respond to these new threats. Unskilled would-be cyber criminals can potentially create malware code using tools such as OpenAI’s, so the barriers to entry are now lower than ever, driving up the number of potential threat actors and cyber-attack volumes. On the other hand, generative AI can help security teams build up their defences, for example by applying it to SOC automation and SIEM/SOAR triage.

  5. Security Skills Shortages and Lack of Diversity

There continues to be a major skills shortage in cybersecurity, one that has persisted for a decade. Initiatives are in place to address this, but organizations must do more to tackle the skills shortage and the lack of diversity. MSSPs and security technology vendors should lead on up-skilling and diversity in the industry by driving training programs, internal skills transfer programs, and efforts to encourage and motivate a more diverse workforce.

Railways are becoming increasingly strategic. They are more energy efficient and pollute less than private vehicles, and they are 15 to 20 times safer than cars.

Compared with private vehicles, they do not entail any fixed cost for travellers. No wonder governments around the world are making huge investments in rail. For instance, 21 out of 27 EU member state national recovery plans have allocated billions to invest in the electrification and modernisation of rail infrastructure. President Biden’s Bipartisan Infrastructure Law has nearly tripled funding for rail infrastructure — to $1 billion a year for the next five years.

Airlines struggled to survive when COVID reduced traffic to unprecedented levels. Fuel price increases and labour shortages compounded the effect of COVID by creating the urgency to profoundly rethink business and operating models, while regulators and passengers demand accelerated investment in environmental sustainability, such as more fuel-efficient traffic management, more sustainable fuels and, in the future, zero-emission aviation.

Both industries have reached an inflection point. Hiring more people and growing the size of fleets and number of routes will not be enough to increase capacity utilisation and offer more competitive and personalised services, while maintaining high safety standards and improving environmental sustainability. Achieving those strategic goals will require railway and airline executives to invest in technology innovation.

Bold Ambition for the Future Will Depend on Realising the Value of Technology Innovation

Railways and airlines have invested in technology for many years to deploy digital customer experience capabilities, such as loyalty programmes, self-service booking and mobile payments, intelligent asset and fleet management capabilities to enhance operational excellence, and scheduling of routes and dispatch to bring together high-capacity utilisation and safety.

However, our recent studies show that they are not standing still. They are now looking at the next generation of technologies, such as 5G, artificial intelligence and machine learning, IoT and edge computing, augmented and virtual reality, even quantum computing for traffic optimisation. They are not doing so for the sake of technology, but to achieve four interdependent strategic business goals:

  • Increase operational efficiency, while targeting net-zero impact​
  • Increase capacity utilisation by combining intelligent scheduling, dispatch and traffic control systems to increase frequency of travel and smart predictive operations to help prevent delays and disruptions 
  • Ensure that efficiency goes hand in hand with safety and security, even with higher utilisation rates thanks to digitally enabled physical security systems, regulatory compliance of operations and cybersecurity​
  • Increase revenue growth through innovative service offerings, often by making their services and hubs — stations and airports — the anchors of a mobility-as-a-service ecosystem

To empower railway and airline executives to make strategic choices about next-generation technology investments, implement new organisational competencies and capacities that accelerate the realisation of technology investment benefits, and select tech partners that understand the technical and business evolution of their industry, IDC has launched new research on railways, airlines, and transportation hubs.

Stay tuned for upcoming research on topics such as ticketing and revenue management, digital twins for intelligent operations, 5G and cybersecurity.

Massimiliano Claps - Research Director - IDC

Massimiliano (Max) Claps is the research director for the Worldwide National Government Platforms and Technologies research in IDC's Government Insights practice. In this role, Max provides research and advisory services to technology suppliers and national civilian government senior leaders in the US and globally. Specific areas of research include improving government digital experiences, data and data sharing, AI and automation, cloud-enabled system modernization, the future of government work, and data protection and digital sovereignty to drive social, economic, and environmental outcomes for agencies and the public.