Why and how financial institutions are modernizing testing

Today, financial services firms face unprecedented consumer expectations. The pressure is on to produce always-on apps that deliver whenever and however users need them.

Organizations are responding to this pressure by choosing to focus on DevOps, agile development and other initiatives that can accelerate time to market, boost performance and improve efficiency. Those producing the best outcomes, though, have one additional thing in common. They are modernizing testing. 

Why is test modernization so important? Consider the experience of one large financial services enterprise that had adopted proprietary test tools that were costly to maintain. Test scripts were developed manually, resulting in long delays as development teams waited for code to be tested. There was simply no way to integrate and deliver code continuously at speed without sacrificing quality.

The dilemma is all too common. Application testing simply hasn’t kept pace with market demands. Many organizations are locked into complex, proprietary tools that require specialized knowledge and lack the scale and flexibility needed to deliver continuous, accelerated test coverage. Instead, they conduct tests only after design and development are complete – often with less than total coverage. 

According to Andrea Cabeça, Executive Superintendent at Bradesco Bank, legacy systems can limit modernization programs. As an example, developing shorter test cycles can be a challenge when dealing with mainframe systems that take 20 hours to run transactions. Regulatory demands on the financial industry add a further layer of complexity: you need to transform and give your team space to be creative and try new things, all while staying within the regulatory constraints that limit you.

There is also the problem of adapting new tools to manage the old legacy world, according to Cabeça. Although financial teams want to embrace collaborative work, unit tests, and more, these new practices don’t always work with old monolithic systems. Making a push to go cloud-first can add challenges of its own.

An automated alternative
Forward-thinking financial services teams are now modernizing testing to break through such barriers. They are moving to an open-source, technology-agnostic test automation framework able to span the entire DevOps lifecycle. 

These new platforms transform and democratize testing. With an automated framework that is easy to use, anyone in the application delivery chain can run SaaS-based, open-source tests at any time, from anywhere. All it takes is a web browser. You can validate performance at every stage of the application lifecycle – from product strategy and code development to delivery and production.

As a result, you have the data you need to determine whether the work you are doing is moving the needle on key business strategies. You can establish a continuous feedback loop that improves quality, drives higher levels of customer satisfaction and builds a better bottom line. 

The business case for modernization
As you might expect, financial institutions take a no-nonsense approach to technology investments, and they’ve found the business case for continuous, automated testing is compelling. 

A report from industry analysts at Forrester explores the total economic impact experienced by five companies making the move to an enterprise-ready, open-source testing framework. Each has adopted a solution that supports continuous delivery, providing 100 percent test coverage at speed. They can automate and standardize end-to-end performance testing and conduct load testing at scale.

Analysts found that over a three-year period, the companies experienced a 207 percent return on investment, realized a net present value of $2.6 million, and produced almost $4 million in operating savings and other benefits. 

By diving deep into each company’s experiences, analysts identified the source of these significant, bottom-line benefits. A few examples: 

A 10 percent improvement in developer efficiency
With a testing platform that supports shift left, developers can test during sprints instead of weeks after the fact. If bugs are found, they don’t waste time reacquainting themselves with the code they’ve written or the use case they’re addressing. Instead, they can quickly isolate and fix issues and build quality in from the ground up.

Forrester found that testing during development saved a half-day of developer time for every 40 hours worked – roughly four hours in 40, the 10 percent gain noted above. Unplanned work was reduced by 28 percent, and team members spent half as much time on test case design.

A 10-fold improvement in application performance
With automation, teams can test more frequently at every stage in the development lifecycle – before new applications and updates hit production and are encountered by real customers. Catching errors and resolving them earlier makes a big impact. Forrester found nearly 40 percent of the financial benefits realized with test automation were linked to these significant application performance improvements.

One company said it had eliminated the spike in call center traffic typical during new software releases as customers reported problems. Another reported application load time improved by 10 to 15 percent. Yet another said software availability had improved from “three nines” to “four nines” – from 99.9 percent uptime (roughly 8.8 hours of downtime a year) to 99.99 percent (under an hour a year).

A $300K annual reduction in operating costs
Legacy testing platforms based on proprietary technology are expensive to operate. They come with a high initial price tag and require costly ongoing maintenance. Forrester found that teams making the move to open-source testing were able to eliminate licensing fees and costly upgrades, saving hundreds of thousands of dollars each year.

Faster time to market
The study shows that test automation accelerates the DevOps lifecycle and enables organizations to release new applications and updates much faster than before. As a result, adopters were poised to accelerate growth and to move more quickly into new markets.

Improved strategic alignment 
An automated test framework made it possible to achieve new levels of transparency and to keep all stakeholders aligned – from product strategy teams and developers to operations and customer service teams. Automated reporting tools made it easy to monitor critical metrics, identify trends and determine how various releases were faring before and after launch. As a result, team members had the information needed to ensure their efforts were effectively supporting important corporate initiatives.

Join the test automation revolution 
If your testing tools have become a bottleneck, it’s time to adopt a modern, open-source framework. When you do, you will be poised to broaden your test program and to keep up with the fast-paced demands of today’s marketplace.

Content provided by Broadcom

SD Times news digest: Grafana Cloud unveils free plan, Cockroach Labs’ $160 million funding, and Blueprint launches RPA platform migration

Grafana Labs announced a new free Grafana Cloud plan that gives users access to Prometheus and Graphite for metrics, Loki for logs, and Tempo for tracing, all integrated into Grafana.

“With Grafana Cloud, you get a service managed by the maintainers of these leading open source projects, whose deep knowledge allows us to run them efficiently at scale better than any other company in the world,” Richard Lam, a senior product manager at Grafana Labs, wrote in a blog post.

The paid Grafana Cloud plan was also upgraded to include new features and five times more metrics.

Cockroach Labs announces $160 million funding
Cockroach Labs announced that it plans to use the $160 million in new funding for further product development of its cloud-native, distributed SQL database and to expand its staff.

The latest financing was led by Altimeter Capital with participation from new investors such as Greenoaks and Lone Pine, and many existing investors. 

The company also announced a free version of Cockroach Cloud for development and education that will soon be released in beta.

Blueprint launches RPA platform migration
Blueprint’s new robotic process automation (RPA) platform migration solution enables companies to quickly switch from one RPA tool to another, a move previously constrained by factors such as code parity, lost credentials, and absent versioning, according to a Blueprint post.

The solution takes in bots from any leading RPA tools and then creates a digital blueprint that can be pushed into other RPA platforms. The blueprints also enable automated processes to connect to relevant dependencies, systems, and constraints.

“Our technology reduces the complexities and cost of shifting RPA tools to near zero, allowing companies to pick the platform that’s best for them, regardless of how much they’ve already developed on a competing platform,” said Dan Shimmerman, the president and CEO of Blueprint. “This ultimately pushes RPA tools down the value stack, commoditizing automation and execution platforms, so companies can choose the vendor that offers the best price and value and simply switch.”

Linux Foundation launches open source management and strategy training program
The new program consists of seven modular courses that teach the basic concepts for building effective open source practices within organizations. 

The courses were designed to be “reasonably high-level,” yet detailed enough to help open source users implement these concepts quickly, according to the Linux Foundation.

“Organizations must prepare their teams to use [open source] properly, ensuring compliance with licensing requirements, how to implement continuous delivery and integration, processes for working with and contributing to the open source community, and related topics,” said Chris Aniszczyk, the co-founder of the TODO Group and VP of Developer Relations at The Linux Foundation. “This program provides a structured way to do that which benefits everyone from executive management to software developers.”


SD Times Open-Source Project of the Week: AlmaLinux

AlmaLinux is an enterprise-grade server OS that CloudLinux is releasing to replace CentOS. According to the company, it will serve as a free Linux OS for the community and will be ready within the first quarter of this year.

Right after Red Hat announced last month that its CentOS stable release is no longer under development, CloudLinux launched a replacement project, code-named Project Lenix. The result is a 1:1 binary-compatible fork of RHEL 8, now named AlmaLinux.

Now, CentOS users can switch to AlmaLinux with a single command and without any switching downtime.

“The demise of the CentOS stable release left a very large gap in the Linux community which prompted CloudLinux to step in and launch a CentOS alternative,” said Igor Seletskiy, the CEO and founder of CloudLinux Inc. “For CloudLinux it was an obvious move: the Linux community was in need, and the CloudLinux OS is a CentOS clone with significant pedigree – including over 200,000 active server instances. AlmaLinux is built with CloudLinux expertise but will be owned and governed by the community.” 

To bolster the community governing ongoing development efforts, CloudLinux has committed to supporting the project for the next eight years with a $1 million annual investment, according to the AlmaLinux website.

SD Times news digest: Harness reaches $1.7 billion valuation, Dynatrace integrates with Snyk Intel data, and WhiteSource expands native support for IDEs

Software delivery platform Harness, which recently reached a $1.7 billion valuation, announced that it will use its recent $115 million in funding to grow its engineering team, support global expansion plans, and extend its intelligent software delivery platform vision.

Harness provides an end-to-end platform for intelligent software delivery that implements machine learning to detect the quality of deployments. 

“Our goal is to create an intelligent software delivery platform that allows every company in the world to become as good in software delivery as the likes of Google and Facebook,” said Jyoti Bansal, the CEO and co-founder of Harness.

Dynatrace integrates real-time vulnerability detection with Snyk Intel data
Dynatrace’s Application Security Module now links the vulnerabilities that it finds to the Snyk Intel database of open-source vulnerabilities.

“We built the Dynatrace platform to provide continuous automation and intelligence for dynamic, cloud-native environments. Extending it to application security, and enabling production detection in dynamic environments, was a natural step,” said Bernd Greifeneder, the founder and CTO of Dynatrace.

Dynatrace Application Security is also optimized for Kubernetes architectures and DevSecOps approaches. 

WhiteSource expands native support for IDEs
The new integrations for JetBrains PyCharm and WebStorm give developers real-time visibility and control over open-source components in their preferred IDEs.

With the new PyCharm and WebStorm additions, WhiteSource now supports six popular environments: the two new IDEs plus JetBrains IntelliJ, Visual Studio, Visual Studio Code, and Eclipse.

“These integrations empower developers to address open source security issues very early in the development process and resolve them easily, shortening release cycles, and saving valuable time and resources,” WhiteSource wrote in an announcement.

Xamarin.Forms 5.0 released
The latest major release includes quality improvements and stable releases of new features such as App Themes, Brushes, CarouselView, RadioButton, Shapes and Paths, and SwipeView.

Visual Studio 2019 is the minimum version required for the new Xamarin.Forms, and Microsoft encourages those updating to remove DataPages and Theme packages from their solutions.

Xamarin.Forms 5.0 will continue to receive service releases through November 2022, Microsoft stated. 

The Open Testing Platform

This is a unique time in the evolution of software testing. Teams worldwide are facing new challenges associated with working from home. Digital transformation initiatives are placing unprecedented pressure on innovation. Speed is the new currency for software development and testing. The penalty for software failure is at an all-time high as news of outages and end-user frustration goes viral on social media. Open-source point tools are good at steering interfaces but are not a complete solution for test automation.

Meanwhile, testers are being asked to do more while reducing costs.

Now is the time to re-think the software testing life cycle with an eye towards more comprehensive automation. Testing organizations need a platform that enables incremental process improvement, and data curated for the purpose of optimizing software testing must be at the center of this solution. Organizations that leverage multiple open-source or proprietary testing tools must consider an Open Testing Platform to keep pace with Agile and enterprise DevOps initiatives.   

What is an Open Testing Platform?
An Open Testing Platform (OTP) is a collaboration hub that helps testers keep pace with change. It transforms observations into action – enabling organizations to inform testers about critical environment and system changes, act upon observations to zero in on precisely what needs to be tested, and automate the acquisition of the test data required for effective test coverage.

The most important feature of an Open Testing Platform is that it taps essential information across the application development and delivery ecosystem to effectively test software. Beyond accessing an API, an OTP leverages an organization’s existing infrastructure tools without causing disruption—unlocking valuable data across the infrastructure. An OTP allows any tester (technical or non-technical) to access data, correlate observations and automate action. 

Model in the middle
At the core of an Open Testing Platform is a model. The model is an abstracted representation of the transactions that are strategic to the business. The model can represent new user stories that are in-flight, system transactions that are critical for business continuity, and flows that are pivotal for the end-user experience.

In an OTP, the model is also the centerpiece for collaboration. All tasks and data observations either optimize the value of the model or ensure that the tests generated from the model can execute without interruption.  Since an OTP is focused on the software testing life cycle, we can take advantage of known usage patterns and create workflows to accelerate testing. For example, with a stable model at the core of the testing activity:

  The impact of change is visualized and shared across teams
  The demand for test data is established by the model and reused for team members
  The validation data sets are fit to the logic identified by the model
  The prioritization of test runs can dynamically fit the stage of the process for each team, optimizing for vectors such as speed, change, business-risk, maintenance, etc.

Models allow teams to identify critical change impacts quickly and visually. And since models express test logic abstracted from independent applications or services, they also provide context to help testers collaborate across team boundaries.
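
To make the idea concrete, here is a minimal sketch of generating tests from a model, treating the model as a directed graph of transaction steps. This illustrates the general model-based technique only, not any particular platform’s implementation; all names are hypothetical.

```typescript
// A model as a directed, acyclic graph of transaction steps:
// each step maps to the steps that may follow it.
type Model = Map<string, string[]>;

// Enumerate every path from a starting step to a terminal step.
// Each path is a candidate test case generated from the model.
function generatePaths(model: Model, start: string): string[][] {
  const next = model.get(start) ?? [];
  if (next.length === 0) return [[start]];
  return next.flatMap((step) =>
    generatePaths(model, step).map((path) => [start, ...path])
  );
}

// A login flow with one branch yields two generated test cases.
const loginFlow: Model = new Map([
  ["open-app", ["enter-credentials"]],
  ["enter-credentials", ["dashboard", "error-message"]],
  ["dashboard", []],
  ["error-message", []],
]);

console.log(generatePaths(loginFlow, "open-app"));
// [ [ 'open-app', 'enter-credentials', 'dashboard' ],
//   [ 'open-app', 'enter-credentials', 'error-message' ] ]
```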

Data curated for testing software
Automation must be driven by data. An infrastructure that can access real-time observations as well as reference a historical baseline is required to understand the impact of change. Accessing data within the software testing life cycle does not have to be intrusive or depend on a complex array of proprietary agents deployed across an environment. In an overwhelming majority of use cases, accessing data via an API provides enough depth and detail to achieve significant productivity gains.  Furthermore, accessing data via an API from the current monitoring or management infrastructure systems eliminates the need for additional scripts or code that require maintenance and interfere with overall system performance.

Many of the data points required to optimize the process of testing exist, but they are scattered across an array of monitoring and infrastructure management tools such as Application Performance Monitoring (APM), Version Control, Agile Requirements Management, Test Management, Web Analytics, Defect Management, API Management, etc.

An Open Testing Platform curates data for software testing by applying known patterns and machine learning to expose change. This new learning system turns observations into action to improve the effectiveness of testing and accelerate release cycles. 

Why is an Open Testing Platform required today?
Despite industry leaders trying to position software testing as value-added, the fact is that an overwhelming majority of organizations identify testing as a cost center. The software testing life cycle is a rich target for automation, since any costs eliminated from testing can be redirected to more innovative initiatives.

If you look at industry trends in automation for software testing, automating test case development hovers around 30%. If you assess the level of automation across all facets of the software testing life cycle, automation averages about 20%. This low average highlights that testing still requires a high degree of manual intervention, which slows the software testing process and therefore delays software release cycles.

But why have automation rates remained so low for software testing when initiatives like DevOps have focused on accelerating the release cycle? There are four core issues that have impacted automation rates:

  Years of outsourcing depleted internal testing skills
  Testers had limited access to critical information
  Test tools created siloes
  Environment changes hampered automation

Outsourcing depleted internal testing skills
The general concept here is that senior managers traded domestic, internal expertise in business and testing processes for offshore labor, reducing opex. With this practice, known as labor arbitrage, an organization could reduce headcount and shift the responsibility for software testing to an army of outsourced resources trained on the task of software testing. This shift to outsourcing had three main detrimental impacts on software testing: it promoted manual task execution, it sidelined the adoption of automation, and it drained business-process knowledge.

With the expansion of Agile and the adoption of enterprise DevOps, organizations must execute the software testing life cycle rapidly and effectively. Organizations will need to consider tightly integrating the software testing life cycle within the development cycle, which will challenge those using an offshore model for testing. Teams must also think beyond the simple bottom-up approach to testing and re-invent the software testing life cycle to meet the increasing demands of the business.

Testers had limited access to critical information 
Perhaps the greatest challenge facing individuals responsible for software testing is staying informed about change. This can mean requirements-driven changes to dependent applications or services, changes in usage patterns, or late changes in the release plan that impact the testers’ ability to react within the required timelines.

Interestingly, most of the data required for testers to do their job is available in the monitoring and infrastructure management tools across production and pre-production. However, this information just isn’t aggregated and optimized for the purpose of software testing. Access to APIs and advancements in the ability to manage and analyze big data changes this dynamic in favor of testers. 

Test tools created silos
Although each organization is structurally and culturally unique, the one commonality found among Agile teams is that the practice of testing software has become siloed. The silo is usually constrained to the team, or to a single application that might be built by multiple teams. These constraints create barriers, since tests must execute across componentized and distributed system architectures.

Ubiquitous access to best-of-breed open-source and proprietary tools also contributed to these silos. Point tools became very good at driving automated tests, but test logic became trapped as scripts across an array of tools. Giving self-governing teams the freedom to adopt a broad array of tools comes at a cost: a significant degree of redundancy, limited understanding of coverage across silos, and a high amount of test maintenance.

The good news is that point tools (both open-source and proprietary) have become reliable at driving automation. What’s missing today is an Open Testing Platform that helps drive productivity across teams and their independent testing tools.

Environment changes hampered automation
Remarkably, while the automated development of tests hovers at about 30%, the automated execution of tests is half that rate, at 15%. This means tests that are built to be automated are still not likely to be executed automatically – manual intervention is required. Why? It takes more than the automation that steers a test for automation to yield results. For an automated test to run automatically, you need the following (a minimal gating sketch follows the list):

  Access to a test environment
  A clean environment, configured specifically for the scope of tests to be executed
  Access to compliant test data
  Validation assertions synchronized for the test data and logic
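
As a loose sketch of that gating logic, here is what a pre-run check covering the four preconditions might look like. Every field below is a hypothetical stand-in for a call into your environment, test data, and test management tooling, not any vendor’s API.

```typescript
// The four preconditions above, modeled as a pre-run gate.
interface RunContext {
  environmentReachable: boolean; // access to a test environment
  environmentClean: boolean;     // configured for this scope of tests
  testDataCompliant: boolean;    // compliant test data is staged
  assertionsSynced: boolean;     // validations match the data and logic
}

// Run automatically only when every precondition holds; otherwise
// report the blockers so they can be automated away, not worked around.
function readyToRun(ctx: RunContext): { ready: boolean; blockers: string[] } {
  const blockers = Object.entries(ctx)
    .filter(([, ok]) => !ok)
    .map(([name]) => name);
  return { ready: blockers.length === 0, blockers };
}
```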

As a result, individuals who are responsible for testing need awareness of broader environment data points located throughout the pre-production environment. Without automating these sub-tasks across the software testing life cycle, test automation will continue to deliver anemic results.

An Open Testing Platform levels the playing field 
Despite the hampered evolution of test automation, testers and software development engineers in test (SDETs) are being asked to do more than ever before. As systems become more distributed and complex, the challenges associated with testing compound. Yet the same individuals are under pressure to support new applications and new technologies – all while facing a distinct increase in the frequency of application changes and releases. Something has to change.

An Open Testing Platform gives software testers the information and workflow automation tools to make open-source and proprietary testing point tools more productive in light of constant change.  An OTP provides a layer of abstraction on top of the teams’ point testing tools, optimizing the sub-tasks that are required to generate effective test scripts or no-code tests. This approach gives organizations an amazing degree of flexibility while significantly lowering the cost to construct and maintain tests.

An Open Testing Platform is a critical enabler of both the speed and effectiveness of testing. The OTP follows a prescriptive pattern – ‘inform, act and automate’ – to help an organization continuously improve the software testing life cycle. An OTP offers immediate value by giving teams the missing infrastructure to effectively manage change.

The value of an Open Testing Platform

Inform the team as change happens
What delays software testing? Change, specifically late changes that were not promptly communicated to the team responsible for testing. One of the big differentiators for an Open Testing Platform is the ability to observe and correlate a diverse set of data points and inform the team of critical changes as change happens. An OTP automatically analyzes data to alert the team of specific changes that impact the current release cycle.

Act on observations
Identifying and communicating change is critically important, but an Open Testing Platform has the most impact when testers are triggered to act. In some cases, observed changes can automatically update the test suite, test execution priority or surrounding sub-tasks associated with software testing. Common optimizations such as risk-based prioritization or change-based prioritization of test execution can be automatically triggered by the CI/CD pipeline. Other triggers to act are presented within the model-based interface as recommendations based on known software testing algorithms.
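
As a rough sketch of what change-based prioritization can look like when a pipeline triggers it (the types and scoring below are hypothetical, not the workings of any specific product):

```typescript
interface TestCase {
  id: string;
  components: string[];   // components this test exercises
  failedLastRun: boolean; // signal from the previous execution
}

// Order tests so those touching changed components run first,
// breaking ties in favor of tests that failed last time.
function prioritize(tests: TestCase[], changed: Set<string>): TestCase[] {
  const score = (t: TestCase) =>
    2 * t.components.filter((c) => changed.has(c)).length +
    (t.failedLastRun ? 1 : 0);
  return [...tests].sort((a, b) => score(b) - score(a));
}
```

A CI/CD step might compute the changed set from the commit diff and hand the reordered list to whichever test runner the team already uses.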

Automate software testing tasks 
When people speak of “automation” in software testing, they typically mean automating test logic against a UI or API. The scope of tests that can be automated goes beyond the UI or API, but it is also important to understand that the scope of what can be automated in the software testing life cycle (STLC) goes far beyond the test itself. Automation patterns can be applied to:

  Requirements analysis
  Test planning
  Test data
  Environment provisioning
  Test prioritization
  Test execution
  Test execution analysis
  Test process optimization

Key business benefits of an Open Testing Platform
By automating, or augmenting with automation, functions within the software testing life cycle, an Open Testing Platform can provide significant business benefits to an organization. For example:

Accelerating testing will improve release cycles
Bringing together data that had previously been siloed allows more complete insight
Increasing the speed and consistency of test execution builds trust in the process
Identifying issues early improves capacity
Automating repetitive tasks allows teams to focus on higher-value optimization
Eliminating mundane work enables humans to focus on higher-order problems, yielding greater productivity and better morale

Software testing tools have evolved to deliver dependable “raw automation,” meaning that the ability to steer an application automatically is sustainable with either open-source or commercial tools. If you look across published industry research, you will find that software testing organizations report test automation rates of (on average) 30%. These same organizations also report that automated test execution averages just 16%. This gap between the creation of an automated test and the ability to execute it automatically lies in the many manual tasks required to run the test. Software testing will always delay the release process if organizations cannot close this gap.

Automation is not as easy as applying automated techniques to each of the software testing life cycle sub-processes. There are three core challenges that need to be addressed:

Testers need to be informed about changes that impact testing efforts. This requires interrogating the array of monitoring and infrastructure tools and curating the data that impacts testing.
Testers need to be able to act on changes as fast as possible. This means that business rules will automatically augment the model that drives testing – allowing the team to test more effectively.
Testers need to be able to automate the sub-tasks that exist throughout the software testing life cycle. Automation must be flexible enough to accommodate each team’s needs yet simple enough to allow incremental changes as the environment and infrastructure shift.

Software testing needs to begin its own digital transformation journey. Just as digital transformation initiatives are not tool initiatives, the transformation to sustainable continuous testing will require a shift in mindset.  This is not shift-left.  This is not shift-right. It is really the first step towards Software Quality Governance.  Organizations that leverage multiple open-source or proprietary testing tools must consider an Open Testing Platform to keep pace with Agile and enterprise DevOps initiatives.  

Industry Watch: Assessing a developer’s work, and worth

It’s a new year, and organizations around the world are giving developers goals for the new year and reviewing their past year’s efforts.

A question I often hear is, ‘How do you assess a developer’s work, and his/her worth to the organization?’

Some organizations still cling to the metric of lines of code produced by a developer, which — given the extra responsibilities of testing, ensuring security, adhering to policies and regulations, and more — might not be a fair valuation in today’s complex world. 

This method is entrenched in the finger-pointing of the past, which modern development organizations have largely eschewed as they look to create a blameless culture.

Forward-thinking companies will look at the role of the team around development, assembled with software engineers, testers, security experts and people from the business side, and look holistically at how that team performs.

“Line counts is a terrible metric, and I think we all agree on that,” said Chris Downard, vice president of engineering at Gigsmart, a website for hiring gig workers. “There are times … when it could be useful as an additional data point, but not necessarily for information.”

When you’re managing humans, he said, reducing every action to data points is not good. Time must be spent building context, as data can often misrepresent things. At Gigsmart, Downard said they don’t use sprints, instead taking what he called “an ongoing, non-stop kind of combat approach.” But they do use sprint reports, from metrics captured every two weeks, to communicate what happened in that time period.

He pointed out he knew what his team was doing between the sprint reports — they were working hard, pairing up, and he saw the number of merge requests going up. “But one of the normal indicators of productivity is, ‘are we moving things across the line to delivered,’ as points completed,” he said, and that number was going down. But based on their knowledge of the team and of the context of everything else going on, they discounted the number, knowing the team’s productivity was very, very high. “It’s just the way the ticketing shook out, producing a data point that was not necessarily indicative of what was accurate,” he noted.

As an organizational leader, Downard said, you need to think about the things you want the organization to produce, and then think about the measurements that will indicate that you’re having success or struggling. Different teams, of course, have different goals.

“If you’re running a DevOps team, you might care about time to resolution, and if you’re tracking the development portion of an IT department, it might be turnaround time for customizing reports and data stuff. You need to track the things that matter to your organization’s success. So for us, I track merge request counts for a week. And we don’t necessarily do anything with that data. It’s not a carrot-and-stick thing. It’s just, it gives me additional information. Kind of like a doctor would be diagnosing a patient.”

But data points often don’t align with assessing developer productivity because while much programming involves the logical reasoning side of the brain, it also involves the creative side. So for Downard, raw data points are “typically terrible. But what we do get is a lot of soft indicators. You get information out of standup updates of people communicating how they feel about what they’re doing. You get hard data points in the sense that you can see their commit activity, but you have to keep context.” As a leader, he said, you have to advocate for developers and translate what they’re running into, to every other organization around development.

Downard said Gigsmart uses Bushido, the samurai code of conduct that defines the values of how you should act and conduct yourself as an individual, as its organizational ethos. “Jason Waldrip, our CTO and I sat down and crafted it into a set of ideals to drive the organization, and I use that as the core for everything we do. So if I’m going to start tracking something, it has to map to some sort of value from there, because if I try to track things that don’t map well to those values, I can’t advocate for those values with the team. It’s not gonna stick, it’s going to become hollow.”

Data points, he said, are nothing more than signals to go look into something and start asking questions. “And it should always be exploratory, not accusatory. That’s important to us.”

TypeScript 4.2 beta now available

The beta release of TypeScript 4.2 is now available. There are a number of new features being added, and a number of breaking changes as well. 

In this version, rest elements can be used in more ways than before. Previously, they were only allowed at the last position of a tuple type, but they can now occur anywhere within a tuple. There is a restriction that rest elements can’t be followed by another optional element or rest element. 
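
A short sketch of the new flexibility (type names are illustrative):

```typescript
// Before 4.2, a rest element was only allowed in the final position.
type TailRest = [string, ...number[]];

// In the 4.2 beta, rest elements can appear anywhere in a tuple,
// provided no optional element or another rest element follows them.
type LeadingRest = [...string[], boolean];         // rest first, fixed end
type MiddleRest = [boolean, ...string[], boolean]; // rest in the middle

// Still an error: an optional element cannot follow a rest element.
// type Invalid = [...string[], boolean?];
```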

The TypeScript 4.2 beta also features smarter internals. It now tracks how types were constructed and differentiates type aliases to instances of other aliases. 

Another new feature is that template literal expressions now have template literal types. Template literal types were introduced in 4.1 as a way to model specific patterns of strings, but there was inconsistency between types and expressions. Now, template string expressions always start out with template literal types.
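
A sketch of the beta behavior described above (setGreeting is a hypothetical function):

```typescript
declare const user: string;

function setGreeting(s: `hello ${string}`) {
  // ...
}

// In 4.1 the expression below had type `string`, so this call failed
// to type-check without a cast; in the 4.2 beta it starts out with the
// template literal type `hello ${string}`, and the call is accepted.
setGreeting(`hello ${user}`);
```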

Other new features in 4.2 include:

Stricter checks for the “in” operator
Abstract construct signatures (see the sketch after this list)
--explainFiles, a flag that shows why certain files are included in a program
Relaxed rules between optional properties and string index signatures
A new way to declare new functions and methods based on the call site
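
Here is a sketch of the abstract construct signatures item above (class names are illustrative):

```typescript
abstract class Shape {
  abstract area(): number;
}

class Circle extends Shape {
  constructor(private radius: number) { super(); }
  area() { return Math.PI * this.radius ** 2; }
}

// An `abstract` construct signature can hold abstract classes, which
// plain `new (...) => Shape` signatures reject.
type ShapeClass = abstract new (...args: any[]) => Shape;

function register(ctor: ShapeClass): string {
  return ctor.name; // note: no `new ctor()` here, as it may be abstract
}

register(Shape);  // OK in 4.2: abstract classes are assignable
register(Circle); // OK: concrete subclasses are too
```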

There are a number of breaking changes as well, such as template literal expressions now having template literal types, noImplicitAny errors applying to loose yield expressions, and more.

“We’re excited to hear your thoughts on TypeScript 4.2! With the beta we’re still in relatively early stages, but we’re counting on your feedback to help make this an excellent release. So try it today, and let us know if you run into anything,” Daniel Rosenwasser, program manager of TypeScript, wrote in a post.

GCC front-end for Rust gets new funding for its development efforts

Open Source Security, Inc. has announced new funding for the GCC front-end for Rust project. The funding will go towards full-time and public development efforts. 

GCC front-end for Rust is an open-source project designed to provide an alternative Rust compiler for GCC. “The origin of this project was a community effort several years ago where Rust was still at version 0.9; the language was subject to so much change that it became difficult for a community effort to play catch up. Now that the language is stable, it is an excellent time to create alternative compilers. The developers of the project are keen “Rustaceans” with a desire to give back to the Rust community and to learn what GCC is capable of when it comes to a modern language,” the team wrote on its GitHub page.

Open Source Security, Inc. aims to address the underfunded and understaffed state of security in Linux. While the organization doesn’t expect Rust code to be included in the Linux kernel in the near future, it saw a security issue with a mixed Assembly/C/Rust execution environment, as well as with mixing different compilers with different implementations. “As the source of the GCC plugin infrastructure in the Linux kernel and nearly all of the GCC plugins adapted for inclusion in the upstream Linux kernel, we too immediately spotted the importance of this problem and set out to ensure both those plugins as well as the security features built-in to GCC itself are able to instrument code from all languages supported by the Linux kernel with compatible and consistent security properties,” Brad Spengler, president of Open Source Security, Inc., wrote in a post.

As part of its efforts, Open Source Security Inc. brought on developer Philip Herron to work on the project full time with the help of Embecosm, a UK-based company involved with GCC/LLVM development. Embecosm is providing Herron’s employment as well as project management services for the project. 

“The project has attracted multiple contributors on GitHub over its time being purely community driven and we want to continue to create an inclusive environment to welcome everyone to learn and create their own mark on the compiler. This can be achieved by creating clear documentation on getting up and running and readable code and a clean review process. Leveraging docker we can automate publishing prebuilt images of the compiler allowing people to test the compiler without requiring a development environment for the compiler, such that people can report feedback easily into the GitHub issue system,” Herron wrote in a post.

Open Source Security, Inc. also stated that, as part of its efforts to help the project remain vendor-neutral, it will not own the copyright on any code developed through its funding. All code will be GPLv3-licensed and copyright will be assigned to the Free Software Foundation.

SD Times news digest: D language 2.095.0, Sider’s recommended coding guide for C/C++ analysis tool, and Apache weekly roundup

The latest release of the D programming language focuses on enhanced Objective-C support, adding the ability to declare Objective-C protocols, as well as improvements throughout the compiler, libraries, and tools, according to the developers behind the language in a post.

The DLang team fixed an issue in which deprecation messages reported a source location deep within the library; developers will now get a template instantiation trace, as is common when tracking down errors. The compiler will also now raise an error whenever there are multiple declarations with more than one definition.

The latest release includes the new D reference compiler (DMD) as well as the beta release of the LLVM-based D compiler. 

Sider releases recommended coding guide for C/C++ analysis tool
Sider’s “Recommended Rules” coding guide contains essential rules that are currently applicable to the C/C++ analysis tool (cpplint) for Sider. 

“By applying these new rules, Sider’s automated code review generates suggestions of greater relevance and importance, lending itself to greater productivity for software development efforts,” Sider wrote in a post.

The coding guide will soon be applicable to more languages, including Java, Ruby, JavaScript, and TypeScript, with more to follow, according to Sider.

Apache weekly roundup 
Last week, Apache saw the release of SkyWalking NodeJS v0.1.0, which includes a built-in http/https plugin and Express and Axios plugins, as well as the initial project core code.

Apache also released updates for its big data software, including ShardingSphere ElasticJob UI 3.0.0-RC1 and Rya 4.0.1. Apache Guacamole 1.3.0 adds support for automatically prompting users for their remote desktop credentials and user group support for both CAS and OpenID, while Apache Tomcat Native 1.2.26 includes bug fixes and Windows binaries built using OpenSSL 1.1.1.

Two security issues were found in Apache Flink: CVE-2020-17518, which allows remote file writing through the REST API, and CVE-2020-17519, a directory traversal flaw that allows remote file reads.

Lenovo unveils new smart glasses for the enterprise

Lenovo unveiled new AR smart glasses this week at CES, designed to change the way employees interact with their workspaces whether they’re working remotely or from the office. The company expects the ThinkReality A3 lightweight AR smart glasses to be available later this year.

“As increasingly distributed workforces and hybrid work models become the reality of a new normal, small and large businesses around the world are looking to adopt new technologies for smart collaboration, increased efficiency, and lower downtimes,” Lenovo wrote in its announcement.

The company will also provide a PC edition for virtual monitors. The ThinkReality A3 PC Edition enables users to see large monitors in their field of view and to use Windows software tools and apps. The glasses can tether to a PC or certain Motorola smartphones via a USB-C cable, the company explained.

The glasses can also be used in more complex environments such as factory floors, labs, retail, and hospitality spaces with an industrial edition. Because the Industrial Edition is supported by the ThinkReality software platform, customers can build, deploy, and manage mixed reality applications on a global scale, according to Lenovo in a post. 

“The A3 is a next generation augmented reality solution – light, powerful and versatile. The smart glasses are part of a comprehensive integrated digital solution from Lenovo that includes the advanced AR device, ThinkReality software, and Motorola mobile phones. Whether working in virtual spaces or supporting remote assistance, the ThinkReality A3 enhances workers’ abilities to do more wherever they are,” said Jon Pershke, Lenovo’s vice president of strategy and emerging business at Intelligent Device Group.

The new solution is also part of Lenovo’s efforts to “accelerate adoption of the next generation of wearable computing” within the enterprise. 

In addition to the A3 smart glasses, Lenovo also produces the A6 headset as well as the Mirage VR S3 for enterprises that want to take it a step further into total VR immersion for use cases such as soft-skills training.
