SD Times news digest: Docker Desktop available for Apple Silicon, GitHub Actions with GitHub CLI, and new Harness integrations

Docker Desktop is now supported on all devices using Apple Silicon.

Users can build and run images for both x86 and ARM architectures without having to set up a complex cross-compilation development environment.
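For instance, a multi-architecture image can be produced in a single step with Docker's `buildx` plugin (a sketch that assumes Docker Desktop is installed and running; the image name is hypothetical):

```shell
# Build and push one image manifest that covers both x86-64 and ARM64,
# e.g. for Intel and Apple Silicon Macs.
docker buildx create --use                  # set up a builder instance
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t example/myapp:latest \
  --push .
```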

Docker Hub also makes it easy to identify and share repositories that provide multi-platform images. 

Additional details are available here.

GitHub Actions with GitHub CLI 
GitHub Actions are now available in a developer’s terminal with GitHub CLI, giving users insight into their workflow runs and files from the comfort of their local terminal with two new top-level commands, ‘gh run’ and ‘gh workflow.’

With the new ‘gh run list,’ users also receive an overview of all types of workflow runs, whether they were triggered via a push, pull request, webhook or manual event.

‘gh run watch’ also helps users stay on top of in-progress workflow runs; it can be used either to follow along as a workflow run executes or in combination with other tools that send alerts when a run is finished. 
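A typical session with the new commands might look like the following (this assumes `gh` is installed and authenticated; the run ID and workflow file name are illustrative):

```shell
gh run list --limit 5       # recent runs across all workflows
gh run watch                # pick an in-progress run and follow it live
gh run view 1234567 --log   # full logs for a finished run (illustrative ID)
gh workflow list            # workflows defined in the repository
gh workflow run ci.yml      # dispatch a workflow manually (illustrative name)
```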

Harness announces new integrations 
Harness announced new integrations with AWS GovCloud, Azure and GCP that provide DevOps and financial management teams with a CI/CD platform for multi-cloud deployments and enhanced cost visibility.

“As more enterprises look to do more with their software and reduce infrastructure costs, they’re turning to multi-cloud architectures to improve uptime, avoid vendor lock-in and gain price leverage,” said Jyoti Bansal, CEO and co-founder, Harness. “With these integrations, Harness is answering that call, providing an abstraction layer between cloud deployment complexity and developers, so every company can deliver next-generation software faster than ever.”

Harness gives customers access to all major public clouds directly and provides the same deployment and cost management experiences that users have come to expect from managing applications hosted on Kubernetes, according to the company in a post.

Windows Terminal Preview 1.8 
With the new preview version, the settings UI now ships inside the Windows Terminal Stable build. 

The settings UI also adds a font face dropdown and new command-line arguments, and the base layer has been removed.

Developers now have the ability to name their terminal windows, which makes it easier to identify windows when using wt CLI arguments. 

Additional details on the updated version are available here.

The post SD Times news digest: Docker Desktop available for Apple Silicon, GitHub Actions with GitHub CLI, and new Harness integrations appeared first on SD Times.

Read more:


Observability: A process change, not a set of tools

If you do a Google search for the phrase “observability tools,” it’ll return about 3.3 million results. As observability is the hot thing right now, every vendor is trying to get aboard the observability train. But observability is not as simple as buying a tool; it’s more of a process change — a way of collecting data and using that data to provide better customer experiences. 

“Right now there’s a lot of buzz around observability, observability tools, but it’s not just the tool,” said Mehdi Daoudi, CEO of digital experience monitoring platform Catchpoint. “That’s the key message. It’s really about how can we combine all of these data streams to try to paint a picture.”

RELATED CONTENT: Observability: It’s all about the data; A guide to observability

If you go back to where observability came from — like many other processes, it originated at Google — its original definition was about measuring “how well internal states of a system can be inferred from knowledge of its external outputs,” said Daoudi. 

Daoudi shared an example of observability in action where one of Catchpoint’s customers was seeing a trend where customers complained a lot on Mondays and Tuesdays, but not on Sundays. The server load was the same, but the services were slower. Through observability, the company was able to determine that backup processes that only run on weekdays were the culprit and were impacting performance. 

“Observability is about triangulation,” said Daoudi. “It’s about being able to answer a very, very complex question, very, very quickly. There is a problem – where is the problem? The reason why this is important is because things have gotten a lot more complex. You’re not dealing with one server anymore, you’re dealing with hundreds of thousands of servers, cloud, CDNs, a lot of moving parts where each one of them can break. And so not having observability into the state of those systems, that makes your triangulation efforts a lot harder, and therefore longer, and therefore has an impact on the end users and your brand and revenue, etc.”

This is why Daoudi firmly believes that observability isn’t just a set of tools. He sees it as a way of working as a company, being aligned, and being able to have a common way to collect data that is needed to answer questions. 

The industry has standardized on OpenTelemetry as the common way of collecting telemetry data. OpenTelemetry is an open-source project used for gathering metrics, logs, and traces — often referred to as the three pillars of observability. 

The three pillars are often referenced in the industry when talking about observability, but Ben Sigelman, CEO and co-founder of monitoring company Lightstep, believes that observability needs to go beyond metrics, logs, and traces. He compared the three pillars to Steve Jobs announcing the first iPhone back in 2007. Jobs started off the presentation by announcing a widescreen iPod with touch controls, a “revolutionary” mobile phone, and a breakthrough internet communications device, making it seem as though they were three separate devices. 

“These are not three separate devices,” Jobs went on to clarify. “This is one device, and we are calling it iPhone.”  Sigelman said the same is true of telemetry. Metrics, logs, and traces shouldn’t be known as the three pillars because you get all three at once and it’s one thing: telemetry.

Michael Fisher, group product manager at AIOps company OpsRamp, broke observability data down further into two signals: symptomatic signals and causal signals. Symptomatic signals are what an end user is experiencing, such as page latency or a 500 Internal Server Error on a website. Causal signals are what cause those symptomatic signals. Examples include CPU, network, and storage metrics, and “things that may be an issue, but you’re not sure because they’re not being tied to any symptom that an end user might be facing.” 

Monitoring tools tend to focus mostly on the causal signals, Fisher explained, but he recommends starting with symptomatic signals and working towards causal signals, with the end state being a union of the two. 

“When something is going wrong [the developer] can search that log, they can search that trace and they can tie it back to the piece of code that’s having an issue,” said Fisher. “The operations team, they may just see the causal symptoms, or maybe there is no causal symptom. Maybe the application is running fine but users are still complaining. Tying those two together is kind of a key part of this shift towards observability. And that’s why I talk about observability as a development principle because I think starting with the symptomatic signals with the people who actually know is a huge paradigm shift for me because I think some of the people you talk to or ITOps teams you talk to is that monitoring is their wheelhouse, whereas many modern shops, OpsRamp included, much more monitoring actually happens on the development team side now.”

Providing good end user experience is the ultimate goal of observability. With monitoring, you might only be focusing on those causal signals, which might mean you miss out on important symptomatic signals where the end user is experiencing some sort of service degradation or trouble accessing your application. 

“When I talk about using observability to drive end-user outcomes, I’m really talking about focusing on observing the things that would impact end users and taking action on them before they do because traditionally this focus on monitoring has been at a much lower level, layer 3, I care about my network, I care about my switches,” said Fisher. “I’ve talked to customers where that’s all they care about, which is fine but you start to realize those things really matter less once you move up the stack and you have a webpage or you have a SaaS application. The end user will never tell you that their CPU is high, but they will tell you that your webpage is taking 10 seconds to load and they couldn’t use your tool. If an end user can’t use your tool who gives a damn about anything else?”

It’s important that observability not just stay in the hands of developers. In fact, Bernd Greifeneder, CTO of monitoring company Dynatrace, believes that if developers just do observability on their own, then it’s nothing more than a debugging tool. “The reason then for DevOps and SREs needs to come into play is to help with a more consistent approach because these days multiple teams create different microservices that are interconnected and have to interplay. This is sort of a complexity challenge and also a scale challenge that needs to be solved. This is where an SRE and Ops team have to help with standing up proper observability tooling or monitoring if you will, but making sure that all the observability data comes together in a holistic view,” he said. 

SRE and Ops teams can help make sure that the observability data that the developers are collecting has the proper analytics on top of it. This will enable them to gain insights from observability data and use those insights to drive automation and further investments into observability. “IT automation means higher availability, it means automatic remediation when services fail, and ultimately means better experiences for customers,” Greifeneder said. 

When looking into the tools to put on top of your observability data to do those analytics, Tyler McMullen, CTO of edge cloud platform Fastly, recommends constantly experimenting to see what works for your team. He explained that these observability vendors often charge a lot of money, and teams might fall into the trap of buying a solution, putting too much observability data into it, and being shocked when they’re charged a lot of money to do so. 

“Are the pieces of information that we’re plugging into our observability, are they actually working for us? If they’re not working for us, we definitely shouldn’t have them in there,” said McMullen. “On the other hand, you only really find out whether or not something is useful after it becomes useful. Figuring out what you need in advance is I think, one of the biggest problems with this thing. You don’t want to put too much in. On the other hand, if you put too little in you don’t know whether or not it is useful.” As a result, your team will need to do lots of experimenting to discover the right process and the right balance. 

Daoudi added that it’s also important to answer the question of why you’re doing observability before looking into products. “Like every new thing that when a company goes and decides to implement something, you start with why? Why do you need to implement observability? Why do you need to implement SREs? Why do you need to implement an HR system? If you don’t define the ‘why’ then what typically happens is first it’s a huge distraction to your company and also a lot of resources being wasted and then the end result might not be what you’re looking for,” he said.  

And of course, it’s important to remember that observability is more of a process, so looking for a tool that will do observability for you won’t work. The tooling is really about analytics on the observability data you’ve gathered. 

“I really don’t think observability is a tool,” said Daoudi. “If there was such a thing as go to Best Buy, aisle 5, or Target, or Walmart and buy an observability tool for like $5 million, it ain’t going to work because if your company is not functioning and aligned, and your processes and everything isn’t aligned around what observability is supposed to do, then you’re just going to have shelfware in your company.”

The post Observability: A process change, not a set of tools appeared first on SD Times.



Mobile security lessons learned from mobile game cheats

Mobile games are often broken into so users can access premium content and paid features and obtain in-game currency. This is done by tampering with memory, bypassing payments and touchID screens, and downloading paid apps for free — and can be done on both jailbroken and non-jailbroken devices. 

In a recent webinar on SD Times, Jan Seredynski, mobile security researcher and pentester at the mobile application protection company Guardsquare, walked attendees through these game cheats and provided four simple tips on how to prevent them. According to Seredynski, these lessons learned from mobile game cheats can be applied to all aspects of mobile application security, from healthcare and e-commerce to banking and more.

Seredynski’s four simple tips are: 


Environment integrity: Detecting a compromised environment, for example a jailbroken/rooted device, an emulated app or system, or the presence of a debugger.
Application integrity: Verifying that the user is running the current version of the application, that application resources haven’t been changed, and that the application has been installed from a legitimate source.
Code integrity: Verifying that the code being executed is identical to the developer’s code. For instance, ensuring that machine code or Java instructions haven’t been changed. 
Obfuscation: Making it harder for attackers to understand your code by renaming variables, methods or classes; encrypting sensitive strings; complicating control flow; and encrypting assets. 
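As a rough illustration of the application-integrity idea, a build step can record digests of shipped resources that the app re-checks at runtime (a minimal sketch; the file name and digest here are hypothetical, not part of Guardsquare's tooling):

```python
import hashlib

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Recorded at build time for each shipped resource (hypothetical entry:
# this digest is simply SHA-256 of the bytes b"foo").
EXPECTED_DIGESTS = {
    "assets/config.json":
        "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def resources_intact(base_dir: str = ".") -> bool:
    """Re-hash resources at runtime and compare against recorded digests."""
    return all(
        sha256_of(f"{base_dir}/{name}") == digest
        for name, digest in EXPECTED_DIGESTS.items()
    )
```

A real implementation would also protect the digest table itself, since an attacker who can patch the code can patch the expected values too — which is exactly why the four components above reinforce one another.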




Each of these protection components matter, Seredynski explained. If you don’t have environment integrity, a hacker can bypass the application integrity; if you don’t have application integrity, a hacker can imitate the compromised environment; without code integrity, a hacker can overwrite protection code; and without obfuscation, a hacker can easily find relevant functions.  

Some other tips Seredynski suggested are to regularly update your protection code so that hackers don’t have enough time to understand those protections, make sure protections work on all operating systems and versions, and check for false positives. 

To learn more and see Seredynski’s do-it-yourself steps on how you can protect your application through each of the four components, watch the full webinar here.

The post Mobile security lessons learned from mobile game cheats appeared first on SD Times.



SD Times news digest: RubyMine 2021.1 released, ShiftLeft CORE, and GrammaTech CodeSonar update

JetBrains announced the latest release of its Ruby on Rails IDE. RubyMine 2021.1 now supports RBS and uses .rbs files in its code insight for improved code completion capabilities. 

Users can also now connect RubyMine to their organization in JetBrains Space, where they can view and clone project repositories, review teammates’ code and write Space Automation scripts. 

Additional details on all of the UI and UX improvements and new features for working with web technologies and frameworks, version control systems and JSON are available here.

ShiftLeft CORE announced
ShiftLeft CORE is a new unified code security platform powered by ShiftLeft’s Code Property Graph (CPG). 

The platform contains NextGen Static Analysis (NG SAST), a modern code analysis solution for developers to find and fix vulnerabilities in their IDE, as well as ShiftLeft’s Intelligent SCA and ShiftLeft Educate, which offers context-sensitive security training for developers. 

“Organizations today don’t have a problem finding vulnerabilities; the challenge is prioritizing and fixing the ones they already have without sacrificing speed in the development process,” said Chetan Conikee, CTO, ShiftLeft. “The groundbreaking features we’re offering in the ShiftLeft CORE platform are designed to address this new dynamic, and turn application security into a business advantage for our customers.”

GrammaTech CodeSonar update
GrammaTech CodeSonar 6.0 provides a deeper integration of SAST within DevOps pipelines.

The new version features an integrated visual representation of selected code for improved remediation of defects, eliminating the need for a separate developer interface, as well as built-in detection, alerts and reporting of OWASP Top 10 risks.

It also includes GitLab integration and additional language and compiler support requested by more than 500 GrammaTech customers to support their transition to DevSecOps practices. 

Postman announces unlimited API collaboration for up to three team members
Teams of up to three members can now have unlimited shared workspaces and unlimited shared requests at no cost. 

“Postman is committed to helping everyone work with APIs more easily, and this new enhancement is a key part of that effort, eliminating a cost barrier and enabling small teams to take full advantage of the platform’s API collaboration capabilities,” Postman wrote in a post.

Additional collaboration details for each Postman plan are available here.

The post SD Times news digest: RubyMine 2021.1 released, ShiftLeft CORE, and GrammaTech CodeSonar update appeared first on SD Times.



Checkov 2.0 now available with new Dockerfile scanner

Bridgecrew has announced the release of Checkov 2.0. Checkov is a static code analysis tool specifically designed for Infrastructure as Code (IaC). 

“Policies that take into account interdependencies within IaC are critical to understanding the impact of misconfigurations,” said Rob Eden, senior engineer and Checkov contributor. “It’s not enough to know that a security group has ports open to the world; we need to know if that misconfiguration is in production or just a test environment in order to prioritize it appropriately. It’s awesome to have an open-source tool providing that level of context.”

Key additions in Checkov 2.0 include 250 new policies, Dockerfile scanning to secure container build tasks, and graph-based mapping. 
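In practice, the new scanner can be pointed at a Dockerfile in the same way Checkov is pointed at a Terraform directory (a sketch that assumes Checkov is installed; the paths are illustrative):

```shell
pip install -U checkov     # install or upgrade to Checkov 2.x
checkov -d ./terraform     # scan an IaC directory (illustrative path)
checkov -f Dockerfile      # scan a Dockerfile for insecure build steps
```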

RELATED CONTENT: 5 ways static code analysis can save you

Checkov first launched in 2019, and since then has helped developers identify misconfigurations in their IaC frameworks like Terraform, CloudFormation, Kubernetes, Azure Resource Manager (ARM), and Serverless Framework. 

“This release is the most significant update to Checkov since it launched early last year,” said Matt Johnson, developer advocate at Bridgecrew. “Dependency awareness means developers have even more context earlier in the development lifecycle, helping companies around the world better secure their cloud infrastructure.”

The post Checkov 2.0 now available with new Dockerfile scanner appeared first on SD Times.



SD Times Open-Source Project of the Week: C# standardization

Microsoft has announced that C# standardization has been open sourced. Work on the C# standards will now happen in the open under the .NET Foundation, while the ECMA C# standards committee, TC-49-TG2, remains responsible for creating the proposed standard for the C# language.

“Moving the standards work into the open, under the .NET Foundation, makes it easier for standardization work,” Bill Wagner, a principal content developer at Microsoft wrote in a blog post. “Everything from language innovation and feature design through implementation and on to standardization now takes place in the open. It will be easier to ask questions among the language design team, the compiler implementers, and the standards committee. Even better, those conversations will be public.”

Preceding the recent open sourcing of the standardization work, the C# compilers have been open source since 2014 and now live in the dotnet/roslyn repository. The dotnet/csharplang repository was later split off to provide a dedicated space for innovation in the language. The recently announced dotnet/csharpstandard completes the group as a third repository related to the C# language. 

The community can now see work in progress on the standard text for C# 6 and C# 7. Microsoft said that issues in dotnet/csharplang and dotnet/docs relating to the spec text will soon move to the new standards repository. 

Also, the published C# spec will be replaced with the version from the standards committee, and the C# 6 draft spec will be removed from the dotnet/csharplang repo.

The post SD Times Open-Source Project of the Week: C# standardization appeared first on SD Times.



SD Times news digest: .NET 6 Preview 3, WhiteSource announces new funding for app security, and Canonical adds full enterprise support for Kubernetes 1.21

.NET 6 Preview 3 is now available. The platform matrix of .NET 6 was significantly expanded in the new preview with the addition of Android, iOS, Mac and Mac Catalyst (for x64 and Apple Silicon) and Windows Arm64. 

The CollectionsMarshal.GetValueRef API was added to make updating struct values in Dictionaries faster and is intended for high-performance scenarios. 

Preview 3 also contains changes that improve code generation in RyuJIT, and interface casting performance has been boosted by 16% to 38%.

Additional details on the new release are available here.

WhiteSource announces $75 million in new funding for app security
The open-source security and management company WhiteSource announced that it raised $75 million in Series D funding led by Pitango Growth, bringing the total funding to $121.2 million. 

“Application security needs have gone beyond just detection to include continuous prioritization and prevention, as demonstrated by recent software supply chain attacks,” said Rami Sass, Co-Founder and CEO of WhiteSource. “This investment brings us closer to creating a future where the cycle of application delivery is always a step ahead of any security risk, and where developers are easily equipped with code they can trust.”

WhiteSource provides its remediation-centric solution to more than 800 companies globally and helps organizations better protect their software applications without affecting the speed of software delivery or performance. 

Canonical announces full enterprise support for Kubernetes 1.21
Canonical announced full enterprise support for Kubernetes 1.21 from the cloud to edge. 

“Canonical Kubernetes is about removing complexity around Kubernetes operations from cloud to edge. We bring certified Kubernetes distributions to allow users to bootstrap their Kubernetes journey, as well as a large tooling ecosystem and automation framework combination, for businesses to reap the K8s benefits and focus on innovation in the growing cloud-native landscape. Our users benefit from the latest features of Kubernetes, as soon as they become available upstream,” said Alex Chalkias, product manager for Kubernetes at Canonical.

Canonical also said that it commits to supporting N-2 releases as well as providing extended security maintenance (ESM) and patching for N-4 releases in the stable release channel.

Additional details are available here.

Gitpod announces new funding
Gitpod announced that it raised $13 million in new funding, as well as new product features and the first DevX Conf, which is focused on improving developer experiences. 

“With VS Code in Gitpod developers get the most popular editing experience combined with all the benefits of a fully automated, cloud-based solution,” Gitpod wrote in a post.

Also, with Docker support and sudo privileges, developers can run containers within their workspace. 

The post SD Times news digest: .NET 6 Preview 3, WhiteSource announces new funding for app security, and Canonical adds full enterprise support for Kubernetes 1.21 appeared first on SD Times.



NativeScript 8.0 launches with new Best Practices Guide

The latest version of the NativeScript framework is now available. NativeScript 8.0 features more streamlining of the core of the framework so that it can serve as a good foundation for future enhancements, as well as the release of a new Best Practices Guide.

According to the team, the previous release last Fall, NativeScript 7.0, was one of the largest structural updates and set the foundation for this release and other future releases. 

“We want to thank the incredible community for their input and support. The contributions provided during this cycle were astounding and you make working on NativeScript an absolute joy. In addition, the tremendous love you have shown on Open Collective, each and every contribution has helped make 8.0 a reality as well as paved the way for more exciting things to come,” the NativeScript Technical Steering Committee wrote in a post.

Key new features in NativeScript 8.0 include: 

Apple M1 support
Accessibility support
CSS box-shadow support
CSS text-shadow support
A hidden binding property
An official eslint package
Support for creative view development using the new RootLayout container

In addition to the first official NativeScript Best Practices Guide, the team also gave the NativeScript website and documentation a refresh.

“Over the years several distinct best practices have emerged when working with NativeScript and we took a moment to outline a few of the most fundamental ones you should be aware of to get the best end result out of your projects,” the Technical Steering Committee wrote.

The framework has two major releases every year, aligned with platform tooling updates. The next major release is expected in the Fall. 

More information can be found in the release notes, including details on how to upgrade. 

The post NativeScript 8.0 launches with new Best Practices Guide appeared first on SD Times.



4 reasons the future of cloud-native software is open source

Over the last several years, cloud-native development has transformed the way we think about software development. To speed up release cycles, build more powerful applications, and deliver superior user experiences at scale, more and more dev teams are embracing this modern approach to software development and building applications entirely in the cloud. 

According to the Cloud Native Computing Foundation (CNCF), there are at least 6.5 million cloud-native developers on the planet today, quite the increase from 4.7 million cloud-native developers that existed in Q2 2019. It’s all but certain that this number will continue to increase as we move further into the future.

With the increase in cloud-native developers, it comes as no surprise that more and more organizations are embracing cloud-native applications. One December 2020 study, for example, found that 86% of organizations were using cloud-native apps.

RELATED CONTENT: GitOps: It’s the cloud-native way

Similarly, though enterprises have long been wary of investing in open source applications, that’s all changing, too. According to Red Hat’s 2020 State of Enterprise Open Source Report, enterprises are increasingly investing in open source solutions. In fact, 95% of survey respondents say that open source is “strategically important” to their overall software strategy.

Add it all up, and the writing’s on the wall: If the future of software is cloud-native, it follows that the future of cloud-native is open source. Here are four reasons why. 

1. Community
The open-source community is vibrant, filled with developers from all walks of life who live all around the world. When you invest in the right open-source tools, not only do you gain access to the software itself, you also can leverage a diverse, global community of committed developers who are eager to help you through problems and troubleshoot issues. At the same time, it’s not uncommon for community members to add new features, build new integrations, or conduct security audits looking for vulnerabilities.

On the flipside, open source and open-core companies that shepherd open-source projects experience the same kinds of benefits. Not only can the community help them build a better, more secure, more feature-rich product, they can also help promote it to folks around the world. 

2. Freedom from vendor lock-in
According to the Flexera 2020 CIO Priorities Report, more than two-thirds of CIOs are concerned about getting locked into cloud providers. 

This is another main driver of open-source adoption. Since open source solutions ship with open standards and full access to source code, enterprises are able to take hold of their own destiny instead of crossing their proverbial fingers and hoping that the vendor’s roadmap aligns with the interests of their business over the long term.

Simply put, open source enables organizations to avoid getting locked into any one vendor — and, by extension, getting coerced into paying hefty licensing fees for the foreseeable future.

3. Customizability
In addition to helping you avoid vendor lock-in, open-source solutions are highly customizable. It’s not uncommon for leading open-source solutions to have hundreds of integrations, built by both the open-source community and the open core company behind the project.

This is a huge deal. No two organizations are the same. Yet when an enterprise invests in a proprietary solution, they aren’t given access to source code and can’t reconfigure the software to meet their unique needs. Of course, some software vendors offer native integrations out of the box. But unless your team uses the tools the vendor supports and nothing else, chances are there will be at least one or two integrations on your wishlist.

When you go the open-source way, you control your own future. Your dev team can build whatever integrations they’d like. They can also fork the entire project and take it in an entirely new direction — one that makes it much easier to meet their objectives.

4. Security and control
In the age of high-profile data breach after high-profile data breach, security is more important than ever before. When you think about regulations and consumer protection laws like GDPR and CCPA — and the resulting potential penalties for non-compliance — the importance of security compounds even further.

There used to be a common misconception that proprietary software was inherently more secure than open source solutions because its source code was hidden from the public and, as such, was harder for bad actors to exploit.

But that misconception has evaporated in recent years. The fact of the matter is that — when you invest in proprietary tools — you’re essentially outsourcing your security stance to the vendor, trusting them that their software is secure.

By providing full access to source code and the ability to configure and extend the software however you like, open source enables organizations to take complete control over their security needs. In today’s day and age, this benefit can’t be overstated.

Is your enterprise ready for the future?
Cloud-native solutions are the future of software because they enable organizations to unlock the true promise of the cloud. But in order to truly do that, software needs to be open source. Not only does open-source software give organizations access to powerful communities of contributors, it also lets them build the perfect tool for the job while retaining complete control over their security requirements.

To learn more about the transformative nature of cloud-native applications and open source software, check out KubeCon / CloudNativeCon Europe 2021, a virtual event hosted by the Cloud Native Computing Foundation, which takes place May 4–May 7. For more information or to register for the event, go here.

The post 4 reasons the future of cloud-native software is open source appeared first on SD Times.

Passing the test of complex technologies

Seemingly small technological failings can have enormous consequences. When they are strung together in complex systems, the results can be catastrophic.  

In 1996, Europe’s then-newest unmanned satellite-launching rocket, the Ariane 5, exploded just seconds after taking off on its maiden flight from French Guiana. Onboard was a $500 million set of four scientific satellites built to study how the Earth’s magnetic field interacts with the solar wind.

According to the New York Times Magazine, the rocket’s self-destruction was triggered when its guidance computer tried to convert a 64-bit floating-point number concerning the rocket’s lateral velocity into a 16-bit integer, resulting in an overflow error that shut the guidance system down. It then passed control to an identical backup computer, but the second computer had also failed the same way at the same time because it was running the same software.
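The failure mode described above is easy to reproduce in miniature. The following is a minimal Python sketch, with illustrative names and values; the actual flight software was written in Ada, and its unhandled exception at this point is what shut the guidance unit down.

```python
# Narrowing a 64-bit float into a signed 16-bit integer overflows
# whenever the value falls outside the 16-bit range.
INT16_MIN, INT16_MAX = -32768, 32767

def to_int16(value: float) -> int:
    """Truncate a float to a signed 16-bit integer, raising on overflow."""
    truncated = int(value)
    if not INT16_MIN <= truncated <= INT16_MAX:
        raise OverflowError(f"{value} does not fit in 16 bits")
    return truncated

print(to_int16(1234.5))  # within range: prints 1234

try:
    # A lateral-velocity reading far larger than the older Ariane 4
    # ever produced cannot be represented in 16 bits.
    to_int16(64000.0)
except OverflowError as exc:
    print("guidance fault:", exc)
```

Note that the backup computer offered no protection precisely because it ran this same logic on the same input, so both units failed identically.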

Three years later, NASA’s $125 million Mars Climate Orbiter burned up in the Martian atmosphere after the ground crew and the software onboard the spacecraft made conflicting assumptions about the conversion of acceleration data between metric and English units. What was intended to be a day of celebration of the craft’s arrival in Mars orbit turned out quite differently because of that mismatch.
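The Orbiter's failure mode can be sketched in a few lines: one component reports a quantity in English units while its consumer silently assumes SI. The function names and values below are illustrative, not the mission's actual interfaces.

```python
# 1 ft/s^2 expressed in m/s^2 (exact, by definition of the foot).
FT_PER_S2_TO_M_PER_S2 = 0.3048

def report_acceleration_english(a_ft_s2: float) -> float:
    """Ground-side value, produced in ft/s^2 (English units)."""
    return a_ft_s2

def trajectory_model_si(a_m_s2: float) -> float:
    """Onboard model expects m/s^2; echoes its input for this demo."""
    return a_m_s2

raw = report_acceleration_english(10.0)

# Bug: the English-unit value is fed straight into the SI model,
# which treats 10.0 ft/s^2 as 10.0 m/s^2.
wrong = trajectory_model_si(raw)

# Fix: convert explicitly at the interface boundary.
right = trajectory_model_si(raw * FT_PER_S2_TO_M_PER_S2)  # 3.048 m/s^2

print(wrong, right)
```

The underlying lesson is that unit assumptions are part of an interface contract; tagging quantities with their units (or converting at every boundary) turns a silent corruption into a checkable invariant.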

Disasters like these involve multiple failings – of design, validation, and interactions of humans with one another and with the system. Deficiencies in these same categories lead to system shortcomings of lesser magnitude but higher frequency that affect many of us every day. Interoperability validation is a particular concern as software-centric systems become more numerous and complex. When devices using different technologies, or even the same basic technology implemented differently, are combined into a single system, they need to be seamlessly interoperable. When they are not – when they prove incompatible – negative consequences large and small usually follow.

There is tension here for developers, who are striving for performance improvements and competitive advantages for their products. As technologies continue to evolve, compatibility issues create a rolling challenge. Standards are key to striking the right balance and promoting the development of ecosystems that serve customers well.

There is no doubt that the widespread adoption of software-centric systems has already yielded a host of benefits. It is changing the speed at which enterprises innovate, grow, and support their customers. It raises productivity, reduces time to market, and fulfills customer demands by leveraging information collected digitally. Combined with advanced analytics and data visualization, that information provides the insights needed for optimizing customer experience with both current products and solutions, and those still under development. Applied together with advanced hardware technology, advanced software technology is fundamental for accomplishing the digital transformation that many organizations are currently working to achieve. And speaking of transformation, one need only look to recent videos of the Perseverance rover successfully landing on Mars to see how much has changed in the U.S. space program since the Mars Climate Orbiter experience.

The challenge is to find new approaches for testing those advanced technologies, approaches that can efficiently reveal potential problems with performance, user experience, security, and interoperability. All of this needs to be done in real-world conditions before the technologies become embedded into devices and deployed in the field.  

Addressing this challenge requires a corresponding emphasis on software-centric design and test solutions. With so much data being produced by the systems under test, obtaining raw test results alone is not enough. Sophisticated analysis and insightful interpretation of those results are critical, particularly for research and development. Automation capabilities are required to increase productivity and ensure consistency of test steps across units and under different operating conditions. As performance limits are pushed to new heights, simulation is more important than ever to gain confidence in a design prior to prototype fabrication. 

Information security continues to grow in importance for the design and test of today’s products and solutions. With new generations of malware, and attackers deploying them around the clock, security considerations cannot be left until deployment time; they must be addressed early in the design, and increasingly in the hardware as well as the software. One need only consider end applications in the financial, utility, communications, national defense, and transportation sectors to realize the importance of keeping these systems secure, and the potential consequences of failing to do so.

The transformation of the automobile and the associated infrastructure provides a good example of these challenges in action. The latest vehicles feature a staggering amount of new hardware and software technology, enabling everything from the powertrain, to the Advanced Driver-Assistance Systems (ADAS), to the progressing levels of autonomous driving and more. New technology for vehicle-to-everything communications, or V2X, will enable vehicles to communicate with each other and elements of the traffic system, including roadside infrastructure, bicyclists, and pedestrians. If it succeeds, according to the U.S. Department of Transportation, V2X can either eliminate or reduce the severity of up to 80 percent of auto accidents. It can also dramatically reduce travel times and slash fuel consumption. But it is complicated technology. Accounting for traffic patterns, adjusting to road conditions, responding to risks outside of normal sightlines, and recognizing other driving hazards is a complex undertaking.  

To help ensure the success of V2X, and of complex new software-centric systems generally, the design and test industry is applying its own innovations – in richer system modeling, high-frequency and high-protocol-content communications, intelligent automated software tests, and advanced manufacturing solutions. We are also partnering deeply with market-leading customers to co-innovate at their pace, while working to advance standards development so that new technology ecosystems develop quickly, safely, and cost-effectively for customers. It is exciting work, and as the saying goes, “failure is not an option.”

The post Passing the test of complex technologies appeared first on SD Times.
