Google celebrates World Password Day 2021 with hope for a passwordless future

Today is World Password Day 2021, and while companies are touting the best password management practices, Google is hoping someday we won’t have to worry about them at all. According to the company, even the strongest passwords can be compromised. 

“You may not realize it, but passwords are the single biggest threat to your online security – they’re easy to steal, they’re hard to remember, and managing them is tedious. Many people believe that a password should be as long and complicated as possible – but in many cases, this can actually increase the security risk,” Mark Risher, director of product management, identity and user security at Google, wrote in a blog post.

Currently, the company provides two-step verification to confirm the identity of users. To expand this feature, the company plans to automatically start enrolling users if their accounts are appropriately configured, Risher explained. In addition, the company is working on advanced security features to make a multi-factor authentication experience that’s even more secure than passwords. 

“For example, we’ve built our security keys directly into Android devices, and launched our Google Smart Lock app for iOS, so now people can use their phones as their secondary form of authentication,” Risher wrote. 

Until it can reach a point where passwords are no longer necessary, Google will continue to invest in tools and features that keep passwords and personal information safe. The company recently launched a password import feature that stores up to 1,000 passwords from third-party sites for free. 

“One day, we hope stolen passwords will be a thing of the past, because passwords will be a thing of the past, but until then Google will continue to keep you and your passwords safe,” Risher wrote.

What others in the industry are saying about passwords: 

Ralph Pasini, president of Exabeam, explained World Password Day 2021 is more important than ever as organizations navigate the new reality of working from home. “Cybercriminals will capitalize on any opportunity to collect credentials from unsuspecting victims. Just recently, scammers began preying on people eagerly awaiting vaccinations or plans to return to the office as a means to swipe their personal data and logins, for instance,” he said. “The most common attack technique that I often see in the breach reports that I read is stolen credentials. This is a never-ending battle between the security industry and cybercriminals, but there are ways organizations can protect themselves against credential theft.”

Mathew Newfield, chief infrastructure and security officer at Unisys, believes there are two simple tips for creating complex and secure passwords: 1. Use a private passphrase rather than a single word. 2. Create a password key. Together, the two can transform a simple passphrase into a complex password. And as a bonus tip, he recommends periodically changing your passwords.
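As a sketch of how a passphrase and a password key might combine, consider the snippet below. The substitution table and capitalization rules are purely illustrative assumptions, not Newfield’s actual method:

```python
# Sketch: turn a private passphrase into a complex password using a
# personal "password key" (here, a character-substitution rule).
# The substitution table below is illustrative only.
SUBSTITUTIONS = {"a": "@", "e": "3", "i": "1", "o": "0", "s": "$"}

def apply_password_key(passphrase: str) -> str:
    """Apply a personal substitution key to a passphrase."""
    transformed = "".join(SUBSTITUTIONS.get(ch, ch) for ch in passphrase.lower())
    # Capitalize each word and join with a separator to mix character classes.
    return "-".join(word.capitalize() for word in transformed.split())

print(apply_password_key("purple otter reads maps"))
# → Purpl3-0tt3r-R3@d$-M@p$
```

The result is long, mixes character classes, and is still memorable, since the owner only has to remember the phrase and the key.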

Russell Reeder, CEO of cloud-based data protection company Infrascale, provided five tips for creating secure passwords:

Be unpredictable: Avoid common words and patterns to minimize the risk of brute-force and dictionary attacks.
Be creative: Build a phrase, and use special characters and numbers. If you can’t think of a good password, Reeder suggested using a password generator.
Be long: The longer the password, the more possible combinations and permutations there are.
Be smart: Don’t share your credentials, and be mindful of phishing.
Be fresh: Update your passwords regularly.
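The arithmetic behind “be long” is straightforward exponentiation, as this quick illustration shows (the alphabet sizes are assumptions chosen for the example):

```python
def combinations(alphabet_size: int, length: int) -> int:
    """Number of possible passwords for a given alphabet size and length."""
    return alphabet_size ** length

LOWER = 26                  # a-z only
MIXED = 26 + 26 + 10 + 32   # upper + lower + digits + common symbols = 94

# Each additional character multiplies the search space by the alphabet size.
print(f"8 lowercase chars:  {combinations(LOWER, 8):.2e}")
print(f"12 lowercase chars: {combinations(LOWER, 12):.2e}")
print(f"12 mixed chars:     {combinations(MIXED, 12):.2e}")
```

Going from 8 lowercase characters to 12 mixed characters grows the search space by roughly twelve orders of magnitude, which is why length and variety together beat either alone.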

For enterprises, Brian Spanswick, chief information security officer at Cohesity, says to implement multi-factor authentication to protect against phishing schemes and password hacks. In addition, increase employee education and give them the tools they need to create more complex passwords and avoid phishing attempts. 

Lastly, Wes Spencer, CISO of Perch Security, said to never reuse a password or rely on the same password across sites. “Most successful breaches occur when a stolen password from one platform is leveraged against another system that shares the same password.”

The post Google celebrates World Password Day 2021 with hope for a passwordless future appeared first on SD Times.

Agile at 20: Where it’s been and where it’s going

It has been 20 years since the Manifesto for Agile Software Development was published, and even longer since the idea was first formed, and yet there still isn’t a clear understanding in the industry of what Agile really is. 

“Far too many teams that claim to be ‘Agile’ are not. I’ve had people — with a straight face — tell me they are ‘Agile’ because they do a few Scrum practices and use a ticketing tool. There is an awful lot of misunderstanding about Agile,” said Andy Hunt, one of the authors of the manifesto and co-author of the book “The Pragmatic Programmer.”

According to Dave Thomas, co-author of “The Pragmatic Programmer” and the Agile Manifesto, just the way Agile is used in conversations today is wrong. He explained Agile is an adjective, not a noun, and while the difference may be picky, it’s also really profound. “The whole essence of the manifesto is that everything changes, and change is inevitable. And yet, once you start talking about ‘Agile’ as a thing, then you’ve frozen it,” said Thomas.

However, Alistair Cockburn, a computer scientist and another co-author of the manifesto, believes that Agile being misunderstood is actually a good thing. “If you have a good idea, it will either get ignored or misinterpreted, misused, misrepresented, and misappropriated…The fact that people have misused the word Agile for me is a sign of success. It’s normal human behavior.”

What is Agile?

One thing that is missing from the Agile Manifesto is an actual definition of Agile. In one of Hunt’s books, “Practices of an Agile Developer,” he defined Agile development as an approach that “uses feedback to make constant adjustment in a highly collaborative environment.” 

“I think that’s pretty much spot on. It’s all about feedback and continuous adjustments. It’s not about standup meetings, or tickets or Kanban boards, or story points,” said Hunt. 

But Thomas believes there is a good reason a definition wasn’t included in the manifesto, and that’s because Agile is contextual. “Agile has to be personal to a particular team, in a particular environment, probably on a particular project because different projects will have different ways of working,” he noted. “You cannot go and buy a pound of Agile somewhere. It doesn’t exist, and neither can a team go and buy a two-day training course on how to be Agile.”

Thomas does note he doesn’t mind Hunt’s definition of Agile because you have to work at it. “None of this can be received knowledge. None of it can be defined because it’s all contextual. The way of expressing the values that we had was so powerful because it allowed it to be contextual,” he said. 

Dave West, CEO and product owner, believes the real reason people don’t understand Agile is because of social systems, not the practice, the actual work or even the problems they are looking to solve. “Over and over again, we see this sort of pattern that agility is undermined not by the work, not even by the skills of the practitioners, but by the social systems and the context that those practitioners are working in…Bosses want to know when something is going to be done, but when you ask them what it is they want you to deliver, they can’t tell you that…but they want to know when it is going to be done,” he explained. 

If we really want to take the opportunity that Agile presented, we need to change the system agility runs within, according to West. For instance, he said while Fidelity was one of the first companies to ever do Scrum, they are still wrestling with the ideas around it today because they didn’t necessarily change the way they incentivize people.

It’s about the core principles

To get back to the true meaning of Agile, we need to get away from the terms and get back to the four core principles, according to Danny Presten, founder of the community. Delivering incremental value, having a good look at the work, and being able to prioritize and improve cycles are “what really makes Agile hum. It’s not the terms. The more people get focused on the principles and the less they are focused on the terms, the better Agile will be,” said Presten. 

A great starting point for teams that have only experienced waterfall or haven’t had as much success with software delivery is to start with Scrum, according to Hunt, but it should only be used as a starting point. “Modern Agile thought goes much further than Scrum, into true continuous development and delivery, committing directly to main many times a day… the goal has always been to shorten the feedback loops, to be able to change direction quickly, to leverage the team’s learning,” Hunt continued. 

Presten compared learning to be Agile with learning to play an instrument. “As you start out, you read the sheet music. It helps make momentum happen for you and gives clarity, but if it stops there and all we do is mindlessly read the sheet music and go through the motions, then there’s a problem,” he said. 

A good way to look at it is to look at how much feedback you are getting and when you are getting it, said Thomas. “The only way to be Agile is to be constantly adapting to your environment. Sometimes that can be minute by minute, sometimes it’s day by day and sometimes it’s week by week, but if you’re not getting feedback as often as you can, then you are not doing Agile,” he said. 

Cockburn explained there have been three waves of Agile. The first was at the team scale, then Agile started to move to the organization scale, and now we are in the third wave which is at a global scale. The global scale includes finance departments, HR departments, legal departments, entire supply chains, governments, social projects, distributed teams and even different geographies. “It’s not just teams. In fact, it’s not merely organizations. It’s not merely software. It’s not really products. It’s global adoption,” said Cockburn.

Cockburn went on to explain that the reason Agile is being looked at on a global scale is because of VUCA: volatility, uncertainty, complexity, and ambiguity. He said the world is “VUCA” and that became even more evident with COVID, the lockdowns and the distributed ways of working for every person, team, industry, company and even country. Everyone needs to have the ability to move and change direction quickly with ease, he said. 

“This is the new and current world. It is happening. Agile long ago stopped being only about software; it is now completely general. One can look at those values and principles and extrapolate them to any endeavor,” said Cockburn. 

Looking back at the manifesto

The Agile Manifesto was created to uncover better ways of working, developing and delivering software. It includes four core values and 12 principles.

From Feb.11-13, 2001, 17 thought leaders met at the Snowbird ski lodge in Utah to try to find some common ground on software development. That common ground became known as the Manifesto for Agile Software Development. At the time, those 17 software developers had no idea what was to come from the industry or how Agile would even play out over the next 20 years.

“The Agile Manifesto fundamentally changed or incrementally changed how people approached work and focused on the customer,” said Dave West, CEO and product owner. “Twenty-five to thirty years ago, I worked for an insurance company in the city of London and we didn’t care about the insurance. We didn’t care about the customer. We just wrote code on the specs. The fact that today we have customer collaboration, the fact we now respond to change, all those behaviors have resulted in a lot of fabulous software.”

The world, however, is so different from when the Agile Manifesto was written that some are wondering whether it is still relevant in today’s modern, digital world. 

According to Robert Martin, one of the authors of the Agile Manifesto and author of “Clean Agile: Back to Basics,” the manifesto itself is just a marker in time. “It does not need any augmentation because it is not a living, evolving document. It is just something that was said 20 years ago. The truth of what was said in the document remains true today,” he said.

Fellow manifesto co-author Dave Thomas believes that the manifesto actually applies even more today as software is moving faster than ever, people are adapting to remote work, getting feedback and adjusting as they go. “It’s becoming clear you can’t plan a year out anymore. You are lucky if you can plan a month out, and so you are constantly going to be juggling and constantly going to be reprioritizing. The only way to do that is if you have the feedback in place already to tell you what the impact is going to be of this decision versus that decision,” said Thomas.

If they could go back…

If Thomas had a chance to go back in time and change anything about the manifesto, he said he would remove the 12 principles and just leave the four values in it because they dilute the manifesto and give an idea that there is a certain way to do Agile. “I would make the manifesto just that one page and then possibly just because it may not be obvious to people, explain why it doesn’t tell you what to do,” he said. 

Peter Morlion, a programmer dedicated to helping companies and individuals improve the quality of their code, believes the 12 Agile principles are still relevant today. “That’s because they’re based on economic reality and human nature, two things that don’t really change that much. If anything, some principles have become more radical than they were intended to be. For example, we should deploy weekly or even daily and we can now automate more than we imagined in 2001. On the other hand, some principles have been given a different meaning than we imagined in 2001: individuals no longer need to be in the same room for effective communication for example,” he recently wrote in a blog post.

Because of Agile, we have been able to adapt to those principles, and while we can’t be face to face in the wake of the pandemic, we can do video calls because of the software that was influenced by the idea of Agile, Presten explained.

If Presten were present at the Snowbird meeting back in 2001, he said he would probably give a hat tip to what outcomes can be expected from Agile, so that those principles can be mapped back to those outcomes to help people understand the what and why of Agile. “I am finding a lot more success and getting value from Agile by setting organizational goals like ‘hey, we want to get better at predictability,’ and then taking steps to get better,” he said.

West, who was not one of the original authors of the manifesto, believes one thing the manifesto was very quiet on was how you measure success and feedback to inspect it, adapt it and improve it. There are a number of new initiatives coming out to provide organizations with better outcomes, such as value stream management and BizOps. According to West, one thing these approaches and Agile all have in common is inspection and adaptation, and the idea of rapid feedback loops and observation. He thinks any of these approaches will help. If you are a software engineer, the Agile Manifesto may be better to look at. If you are on the business side of things, the BizOps Manifesto might be a better start, but ultimately he said to begin with the customer, the problem and the outcome you seek. 

Looking back at the manifesto, co-author Hunt said if he had a chance he would add a preface to it that explains Agile is not Scrum. “Scrum is a lightweight project management framework. Agile is a set of ideals that a method should support. They are not the same, and you could argue that Scrum is not even all that Agile; it’s more like a mini-waterfall. Twenty years ago maybe we could wait weeks for feedback. Today, typically, we cannot,” he said. 

Thomas would also add something about respecting individuals over respecting the rules in order to reflect that it is not the organization’s job to tell individuals how to behave; it’s their job. In retrospect, he also would have liked to have had a more diverse group of people involved in the manifesto. Cockburn, though, noted that if anything inside that room 20 years ago had been different, if anyone else would have been added, the outcome would have been completely different and it probably would have been more difficult to come to an agreement. 

What Cockburn would change about the manifesto is the wording of responding to change over following a plan. “The discussion we had was that the act of planning is useful. [When] the plan goes out of date, you have to respond to change. People, especially programmers, use it to mean I don’t have to make a plan. I don’t have to have a date. And that’s just flat incorrect. There’s no way to run a company if you don’t have dates, prices and budgets,” he said.

Presten added: “I’m just so grateful for the founders, the folks in Snowbird and what they created. It really made the world a better place…It’s changing the world that we live in for the good, and then also the culture that it is creating at the companies we work at where decisions are getting decentralized. People are able to come in and grow and learn and fail fast to succeed, and having that safety net there has been a really cool thing, so I’m just super grateful for the founders and the work they did, kind of putting their neck on the line. I think we’ve all benefited from that.”

Technical Agile

While it is normal for ideas to get diluted over time, Robert Martin believes that the meaning of Agile has become more than just diluted; it has lost its way. 

He explained that Agile was originally developed by programmers for programmers, but after a couple of years there was a shift to bring Agile to project management. 

“The influx of project managers into the Agile movement changed its emphasis rather dramatically. Instead of Agile being a small idea about getting small teams to do relatively small projects, it turned into a way to manage projects in some bold new way that people could not articulate,” Martin said. 

Martin explained that the original goal at the Snowbird meeting, where the Agile Manifesto originated, was to bridge a divide between business and technology, but the business side took over the Agile movement and disenfranchised the technical side. 

He said at one point the Agile Alliance tried to hold a technical Agile conference in addition to its annual Agile conference, which reinforced the idea that Agile fell off course. It was held twice — in 2016 and 2017 — and then discontinued.

“What we see today now is Agile is very popular on the project management side and not very popular on the technical programming side,” said Martin. “There are remnants of technical Agile such as Test-Driven Development and refactoring, but that’s prevalent in the technical community and not the Agile community.” 

“Does the Agile Manifesto help the project management side of things? Yes of course, because about half of Agile was about project management, but the other half — the technical side —  that part fled. And so the project management side of Agile is now lacking the technical side and in that sense, it has not been a good evolution from the early days of the manifesto till today. It has been a separation, not a unification. I’m still waiting for that unification,” he added.

He explained without that unification, there will be an increasing number of software catastrophes. “We’ve already seen quite a few and they have become fairly significant. We’ve had the software in cars lose control of the cars, kill dozens of people and injure hundreds of people. There have been a number of interesting lawsuits paid out because of that, just a software glitch has done that. We’ve heard trading companies lose half a billion dollars in 45 minutes because of software glitches. We’ve seen airplanes fall out of the sky, because of software that wasn’t working quite right and this kind of failure of the software industry is going to continue.” 

If the business side and technical side of software development cannot be united again, Martin predicts the government will eventually step in and do it for us. 

“We cannot have programmers out there without some kind of technological disciplines that govern the way they work, and that’s what Agile was supposed to be. It was supposed to be this kind of governance umbrella over both project management and technology, and that split. Now many [technologists] are free to do what they want without any kind of discipline,” said Martin.

“My hope is that we could beat the government there and that we can get these two back together before the government acts and starts legislating and regulating, because I don’t trust them to do it well,” Martin added.

SD Times news digest: Mirantis updates Kubernetes IDE, Perforce launches Android and iOS virtual devices, and SmartBear adds testing support for Apache Kafka

Mirantis announced a new version of Lens, a Kubernetes IDE that streamlines working with Kubernetes clusters and provides a unified way to access clusters, services, tools, pipelines and automations through a new catalog system. 

The 5.0 version now includes Lens Spaces, which integrates with the Lens IDE and lets developers create collaborative spaces for all of their cloud-native development needs. 

Lens 5 also includes Hotbar, a new function that allows users to build their own workflows and automations within the desktop application. 

Additional details on Lens 5 are available here.

Perforce launches Android and iOS virtual devices

Perforce announced the availability of Android emulators and iOS simulators as part of the device lab available in Perfecto’s Intelligent Test Automation platform. 

With these new virtual devices, mobile app developers and testers can perform manual and Appium-based test automation in parallel, in the cloud and across different geographies, according to the company. 

“By strategically testing on virtual devices, organizations give developers faster feedback, so they can catch and fix issues quickly without interrupting their workflows,” said Eran Kinsbruner, a DevOps evangelist at Perforce. “Virtual devices complement real device testing later in the cycle to deliver a powerful, comprehensive combination for getting feedback quickly without compromising application quality.”

SmartBear testing now supports Apache Kafka

SmartBear’s ReadyAPI now supports API testing for real-time event-driven architecture with Apache Kafka event streaming services.

Apache Kafka enables teams to capture data in real-time from databases, sensors, mobile devices, cloud services and software applications in a scalable, reliable and secure way.

“Organizations are shifting from centralized, complex data lakes to a renewed focus on the data pipeline – or data in-flight. Event-driven architectures and services are a key enabler of this shift. The challenge now is ensuring all processes and data flow behave as designed,” said Alianna Inzana, the senior director of product management at SmartBear. “With ReadyAPI’s testing support for Kafka, organizations can now deliver quality at speed into their event-driven architecture, rolling out higher quality applications faster.” 

SD Times Open-Source Project of the Week: Kubewarden

Kubewarden is a new open-source policy engine aiming to simplify the adoption of policy-as-code. It provides a set of Kubernetes Custom Resources that makes the enforcement of policies in a cluster easier.

According to Flavio Castelli, distinguished engineer at SUSE and contributor to the project, policies can be written in any programming language because Kubewarden uses WebAssembly. The policies are also portable binary artifacts, which means that a policy could be built on a macOS host and then deployed to a Kubernetes cluster made of x86_64 Linux nodes. 

It is also secure by default because of WebAssembly. “All policies live in their own sandbox with no access to the host environment. The policy server receives requests coming from Kubernetes and then evaluates them based on relevant policies,” Castelli explained in a blog post.

Policies can be pushed to and pulled from container registries as OCI artifacts. This increases the flexibility of registries, because they can store artifact types beyond regular container images. Other companies like Amazon, Microsoft, Google and GitHub already offer this capability in their registries, according to Castelli.
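Putting these pieces together, deploying a policy pulled from a registry comes down to creating one of the project’s Custom Resources. A minimal sketch follows; the module URL is illustrative, and field names should be checked against the current version of Kubewarden’s CRDs:

```yaml
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: no-privileged-pods
spec:
  # WebAssembly policy module, pulled from an OCI registry
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.2.5
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations: ["CREATE"]
  mutating: false
```

Once applied, the policy server evaluates matching admission requests (here, pod creation) against the WebAssembly module referenced in `spec.module`.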

“As I learned, the biggest obstacle for a policy author is the steep learning curve needed to write policies. It takes time to become comfortable with the coding paradigms that existing solutions impose — especially because these paradigms are different from what developers are used to. Wouldn’t it be great to be able to reuse existing knowledge? If only there was a way to write policy as code using a programming language of your choice. If that was possible, suddenly teams who want to write policies as code would be able to tap into their existing skills and significantly reduce the barrier to entry. These and more are the questions that led to the creation of the Kubewarden project,” Castelli wrote in a post.

Guest View: Use hackathons to validate your product

You think you have a great product. Your product manager thinks you have a great product. Your developers think they have created a great product. The question is – how do you prove this before you send it out to your alpha and beta testers for real-world feedback? 

To answer that question, we recommend the multistage hackathon approach to ensure product-market fit and usability. Multistage hackathons can start earlier than the “final product” stage, yielding more useful feedback. While the “final product” stage is not as well defined in these days of agile development and CI/CD, we’re defining “final product” as something that is generally agreed upon to be ready for market launch.

RELATED CONTENT: How to coordinate an exciting and productive hackathon

Using a series of hackathons can make it easier to verify that you are solving the customer problem you intended to solve. What you think you accomplished in the lab isn’t always the case in the real world. Use hackathons to inject a bit of the “real world” into the development process.

You want to have at least three hackathons for three main reasons: 1) You won’t catch everyone in a given day. 2) You won’t catch everything in a given day. 3) You need time to iterate and incorporate feedback. 

Individual preparation

Hackathon #1 needs to focus on the use-case level. For example, you want someone to test a car by driving to a specific location. During hackathon #1, you give them GPS and detailed instructions.

For Hackathon #2, the task is the same, but instead of GPS and instructions, you give them a road atlas and some verbal directions. Hackathon #2 is more of a guided, end-to-end test.

Hackathon #3 is a true, open-ended usability test. Hand them the car keys and tell them to get to the destination. The goal of hackathon #3 is to determine whether, without any specific guidance, the user can easily achieve the objective using the product. This allows them to spend more time exploring and comprehensively stress-testing the application.

Tasks for all hackathons

The hackathon management team needs real-time visibility into what people are doing – either by recording the sessions or, once in-person hackathons come back, via “feet on the ground.” The managers should anticipate and prepare for questions related to the hackathon tasks but should also hold back guidance to make sure they don’t interfere with the process they are trying to test.

For all hackathons, prepare a way to measure results. Results come in two flavors: supervised and unsupervised metrics. Unsupervised metrics include basic system metrics, such as request latency and error rates. Supervised metrics include data collected from the participants as well as more qualitative feedback, such as time to complete each step, individual videos of use-case execution, comments, complaints, and exit interviews.
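Unsupervised metrics of this kind can be aggregated with a few lines of code. A minimal sketch, assuming hypothetical per-request log records (the field names are invented for illustration):

```python
import statistics

# Hypothetical per-request log entries collected during a hackathon session.
requests = [
    {"latency_ms": 120, "status": 200},
    {"latency_ms": 340, "status": 200},
    {"latency_ms": 95,  "status": 500},
    {"latency_ms": 210, "status": 200},
]

latencies = [r["latency_ms"] for r in requests]
errors = sum(1 for r in requests if r["status"] >= 500)

print(f"median latency: {statistics.median(latencies)} ms")  # 165.0 ms
print(f"max latency:    {max(latencies)} ms")                # 340 ms
print(f"error rate:     {errors / len(requests):.0%}")       # 25%
```

Supervised metrics (videos, interviews, comments) still need humans to review them, but collecting the system-side numbers automatically frees the management team to focus on that qualitative feedback.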

Hackathon #1

The first hackathon should be small. Consider hackathon #1 to be your initial product focus group. You may have the most amazing back-end technology, but that’s pretty useless if no one can leverage it. The focus of the first hackathon should be usability.

The task should provide a “sample” of what the participants should expect to accomplish at the end. Can they get there? Is the product easy to use? Difficult? Was a user able to achieve what the UX manager set out to do?

Hackathon #2

The second hackathon needs to consist of a large crowd, the bigger the better. Again, make it simple by asking them to accomplish a specific task, but more complex than the first one. One goal of the second hackathon is to test performance. If it slows down when only an internal group is using it, degradation of performance will be an even greater issue when it’s being used by the “general” public.

Hackathon #3

Outcomes are tested during hackathon #3. Instead of assigning a single task, the hackathon manager needs to provide a series of objectives, without going into detail what the end products should look like. The results then need to be examined to make sure the teams could accomplish the individual objectives.

Post-hackathon analyses

While the hackathon easily allowed for supervised metrics, the real metrics come after the hackathon is over.

How useful is the product over the long term? While some software is completely unique, with no other options on the market, most applications have alternatives. Once the hackathon is over, the product development team needs to track usage. Did the participants continue to use the product once the hackathon was over? Is it delivering results for them? Or did they use it for the hackathon and never log in again?

Each member of the development team is working on a specific task, in a silo, during the product development process. With the hackathon, they get the opportunity to see what their peers have accomplished and get introduced to the big picture, the full end result of their work.

While the hackathons help drive success for individual products, having hackathons as a regular part of the product stress testing reinforces the big picture to the entire team.

SD Times news digest: Visual Studio Code 1.56 released, Contrast Security adds Go support, and SmartBear supports Simulink for peer code review

The April 2021 release of Visual Studio Code includes improved hover feedback to help users quickly find clickable editor actions, terminal profile improvements, and debugger inline values. 

Developers can also now temporarily toggle the line numbers of a cell in the current sessions from the cell toolbar or change the visibility of line numbers for all notebooks through the ‘notebook.lineNumbers’ setting.

The team explained the release continues to improve its support for the upcoming TypeScript 4.3 release, and Microsoft is also previewing Remote Repositories (RemoteHub), which enables developers to instantly browse, search, edit and commit to any GitHub repository directly from within VS Code.

Additional details on all of the new features in Visual Studio Code 1.56 are available here.

Contrast Security adds Go support

Contrast Security announced the addition of the Contrast Go agent to its Contrast Application Security Platform, which is particularly useful for organizations that want to secure APIs. 

The Contrast Go agent performs software composition analysis to locate known vulnerabilities while using integrated analysis to detect unknown vulnerabilities. 

 “Contrast eliminates false-positive security alerts that plague legacy application security approaches. These inundate security teams with alerts that pose no risk and bog down development release cycles. For applications in Go, a better alternative did not exist until now. The Contrast Go agent detects only those vulnerabilities that matter while making it simple and fast for developers to remediate vulnerabilities on their own,” said Steve Wilson, the chief product officer at Contrast Security. 

SmartBear supports Simulink for peer code review

SmartBear extended the peer code and document review capabilities of its Collaborator tool with a new module that supports Simulink models.

“Simulink models can be complex with multiple layers, and until now, many users did not have an easy way to effectively peer review and document their findings,” said Brian Downey, the senior vice president of product at SmartBear. “This is a natural progression for Collaborator, extending beyond traditional code and document review, to unify and support other engineering disciplines and artifacts in a single, enhanced peer review tool.”

Collaborator’s new Simulink module also ensures a defined peer review process in which all team members are included and the correct data can be captured in one place. 

Sentry tackles developer workflow and productivity 

Sentry expanded its platform with new features that help developers cut the time it takes to resolve critical code-level issues by making those issues easier to find and fix.

“Our focus has always been on delivering solutions that help all developers get to the root cause of issues with unparalleled depth, so developers can solve, not just quickly, but also comprehensively,” said Milin Desai, CEO, Sentry. “These new features further advance that mission by expanding search, trace, and review capabilities for both error and performance monitoring to surface the most critical issues and trace them back to the problematic code.”

The new capabilities include Review List, which offers Sentry users fast access to high-priority issues; expanded search and filtering capabilities; Quick Trace, which serves as a mini map for navigating between errors across frontend and backend systems; and more. 

The post SD Times news digest: Visual Studio Code 1.56 released, Contrast Security adds Go support, and SmartBear supports Simulink for peer code review appeared first on SD Times.

Digital experience monitoring the key to supporting a distributed workforce

While making sure applications are up and running is important, it may be even more important to perform monitoring that is from the perspective of your users. After all, who cares if your APM data shows an application to be up and running if the user is experiencing an issue that’s gone undetected? This is where digital experience monitoring, or user experience monitoring, comes into play. 

“APM focuses on just collecting data from the application. It doesn’t collect data from the users. It doesn’t collect data from the network. And data from that interconnected digital chain, that needs to come together to deliver a great digital experience to customers and employees,” said Nik Koutsoukos, chief marketing officer at Catchpoint, a digital experience monitoring platform provider.

According to Koutsoukos, the goal of digital experience monitoring is to measure the “performance of applications and digital services from the vantage point of a digital user.”

He believes that any company delivering a digital service needs to be able to answer two questions: 1) Do I understand what my users are experiencing? 2) Do I have control of all of the services involved in delivering those experiences to my users?

In addition, companies need to be able to answer those questions quickly so they can resolve issues quickly. 

“Time is of the essence,” said Koutsoukos. “Consumers and employees and digital users nowadays don’t have the patience for poor service or an outage. Just wait milliseconds and people are moving onto the next competitor and they’re trying to find solutions themselves. The user experience stakes have gone incredibly high. You have to be able to respond very quickly to a problem. In fact, I would say it’s not a question of reacting quickly to a problem. You have to be able to identify a problem really before it impacts the user experience of a customer or an employee because by the time they see it, it’s too late and they’re moved on to some other competitor or solution. They’re not going to wait for you, so this is where your ability to collect data and act on the data proactively is super important.”

The three components of digital experience monitoring

According to Koutsoukos, digital experience monitoring can be further broken down into three categories: 

Real User Monitoring
Synthetic/Active Monitoring
Endpoint Monitoring

Real user monitoring is all about collecting data from the browsers of actual users. 

Synthetic monitoring involves doing tests that allow you to determine what accessing a website or application would be like for an end user. For example, if you have an application that you want to deploy to China, but you don’t currently have users in China, you can simulate user transactions and test the performance before it goes live into production.

This involves using bots that behave like users to test things like: “Can I access the application? Is it up and running? Is the page rendering properly? And how is it performing in terms of response time, latency, and jitter?”
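A minimal synthetic check can be sketched in Python. This is a simplified illustration, not Catchpoint's implementation: the URL is a placeholder, and real platforms run full browser-level transactions from distributed agents rather than simple HTTP probes:

```python
import time
import urllib.request

def synthetic_check(url: str, timeout: float = 10.0) -> dict:
    """Probe a URL the way a monitoring bot might: is the site up,
    does it return content, and how long does the response take?"""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read()
            return {
                "up": 200 <= resp.status < 400,
                "status": resp.status,
                "latency_s": round(time.monotonic() - start, 3),
                "bytes": len(body),  # crude stand-in for "did the page render content"
            }
    except Exception as exc:
        # DNS failures, timeouts, and connection errors all count as "down"
        return {"up": False, "error": str(exc),
                "latency_s": round(time.monotonic() - start, 3)}

# A real platform would run checks like this on a schedule from many
# geographic vantage points (e.g. an agent in China for the pre-launch
# example above) and alert on status, latency, and content changes.
print(synthetic_check("https://example.com"))
```

Because the probe is scripted rather than driven by real traffic, it can run before any users exist in a region, which is exactly the pre-launch scenario described above.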

If a problem is identified, the question then becomes finding out what that problem is, Koutsoukos explained. 

“If I establish that users can’t get to my website from China, the question is what is causing that outage? Is it the application itself? Is it my CDN provider, is it a DNS problem? Is a broadband or backbone ISP down? Is it a network issue? So the question then becomes: Do you have the data from that digital chain that is interconnecting your application to your users so you have the data to point me to where the problem is.” This element of synthetic and active monitoring is also sometimes referred to as network monitoring, Koutsoukos explained. 

Finally, there is endpoint monitoring, which involves collecting data directly from a device. This is more common in the case of employees as end users, not customers, since companies don’t have a way of collecting data from their users’ devices, but may be able to monitor employee devices to gather metrics. 

After the data from these three components of digital experience monitoring is correlated and analyzed, it can then be used by the IT teams to help troubleshoot problems.

Core Web Vitals

The Core Web Vitals are also a crucial part of user experience monitoring. They were created as part of Google’s Web Vitals initiative, which aims to provide unified guidance on the metrics most important for delivering good user experiences. 

“Site owners should not have to be performance gurus in order to understand the quality of experience they are delivering to their users. The Web Vitals initiative aims to simplify the landscape, and help sites focus on the metrics that matter most, the Core Web Vitals,” the Web Vitals website states. 

The Core Web Vitals are a subset of Web Vitals focused on three aspects of user experience: loading, interactivity, and visual stability. The three metrics that correspond to those focus areas are Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS). 

The vitals can be measured through a number of tools, including Chrome User Experience Report, PageSpeed Insights, and Search Console. 

Koutsoukos added: “Ultimately it’s meant to capture the quality of the experience that a user is having on the mobile device and on the desktop device.” 

In addition, Koutsoukos predicts that the Core Web Vitals will start to have a heavier impact on SEO. Google has already been using them when ranking websites in its search results, but Koutsoukos believes they will carry even more weight in rankings going forward. 

Digital experience monitoring’s role in the distributed workforce

Koutsoukos has observed that digital experience monitoring has become more important in the past year than ever before, because there are more digital end users than ever before. For example, people who had never ordered groceries online now need to, and millions of kids and teachers need to conduct classes using technology.

“Think about credit card processing systems and services. All of a sudden you saw a huge, huge spike in demand for what they were doing. The whole delivery system for groceries, local or global or more larger scale, had to sort of increase capacity to deal with the increased demand,” said Koutsoukos.

Even though states in the U.S. are starting to roll back restrictions to what they were pre-COVID-19 and the pace of vaccinations continues to rise, that doesn’t mean this digital demand is going to slow down any time soon. “[That digital demand is] going to continue being high,” said Koutsoukos. “In fact, in some cases it’s never going to go back to pre-covid levels.”

In addition, the mass shift to remote work exposed some of the internet's weaknesses in handling shifting demand, he explained. Remote workers have to rely on their home networks rather than a business connection, which is a challenge for IT teams who used to monitor network traffic as part of their digital experience monitoring. 

“All of a sudden the question that came into play is: is IT in a position to deliver a great service to their employees now that they are not in an office with an internet connection and are relying on home connections? That has ramifications on how you monitor the digital experience of employees, are you in a position to troubleshoot problems when they arrive, and do you have the ability to do that,” said Koutsoukos.

According to Koutsoukos, this is where endpoint monitoring comes into play. When an employee was in an office it wasn’t necessary to monitor endpoints because the end user was in reach of the IT team. 

“They’re remote and you just don’t have a clue of what experience they’re having on their PC. The ability to reach from an endpoint all the way to the employees has become very much needed,” said Koutsoukos. 

Predicting user intent is the future of digital experience monitoring

Search company Algolia believes that digital experience monitoring will evolve to be able to predict a visitor's intent.

Understanding why a user is there and what they want to achieve would enable sites and applications to surface relevant search results, recommendations, offers, and in-app notifications. It could also enable site navigation that is completely customized to a particular user. 

“There has been a fundamental shift in how companies earn trust online, and no matter the industry, it’s driven by an increasing sense of consumer urgency. As we head toward a cookieless world where data privacy is much more stringent, organizations must cease reliance on external data sources, or their business will suffer,” said Bernadette Nixon, CEO of Algolia. “Immediately gathering, utilizing, and protecting first-party data is mission-critical for every brand. However, companies no longer have minutes to spare when delivering what a customer is looking for — they must show results instantly or suffer the consequences of their customers bouncing to competitor’s sites. That is a big part of Algolia’s larger vision.”


The post Digital experience monitoring the key to supporting a distributed workforce appeared first on SD Times.

How does your company help its customers with digital experience monitoring

Nik Koutsoukos, vice president of product marketing at Catchpoint, explained:

In a digital economy enabled by cloud, SaaS, and IoT, applications and users are many and can be located anywhere. Catchpoint is the only Digital Experience Observability platform that can scale and support today’s customer and employee location diversity and application distribution.

We enable enterprises to proactively detect, identify, and validate user and application reachability, availability, performance, and reliability, across an increasingly complex digital delivery chain. Industry leaders like Google, L’Oréal, Verizon, Oracle, LinkedIn, Honeywell, and Priceline trust Catchpoint’s out-of-the box monitoring platform to proactively detect, repair, and optimize customer and employee experiences. 

RELATED CONTENT: Digital experience monitoring the key to supporting a distributed workforce

Our platform consists of four key components that empower you to take your digital monitoring initiatives to the next level:

Proactive, True Synthetic Monitoring: Leverages the largest public global network in the industry and the ability to collect active data from anywhere within the enterprise network and datacenter so you can provide a top-notch user experience.
Real User Monitoring: Provides a complementary view of your users’ actual experience. Our RUM solution helps you swiftly resolve performance issues, optimize conversions, and make better and more profitable business decisions.
Network Monitoring: Proactively detects and resolves issues throughout your entire network – from layer 3 to layer 7 – to lower MTTR and improve end users’ digital experiences.
Endpoint Monitoring: Unleashes the power of your digital workplace so you can see exactly what your employees see on their screen. Isolate the cause of delays to the device, network, or application to quickly identify and fix user-impacting issues.

The post How does your company help its customers with digital experience monitoring appeared first on SD Times.

SD Times Open-Source Project of the Week: OSAS

One-Stop Anomaly Shop (OSAS) is a new open-source project from Adobe Security. OSAS is a security intelligence toolset for detecting anomalies.

Researchers can use OSAS to experiment with data sets, control how they are processed, and shorten the path to finding a solution for detecting security threats. 

“Logs are not always straightforward. Security-related logs are even more heterogenous and verbose, often presenting a large feature-space due to the unbound nature of attribute values. Often when using machine learning (ML) algorithms and models this large feature-space can create an adverse effect known as data sparsity. This means that most supervised and unsupervised ML algorithms will struggle to find structure within the data and are likely to overfit and handle previously unseen examples poorly,” Chris Parkerson, marketing lead for the Adobe Corporate Security Team, wrote in a post.

OSAS uses a two-step approach to data processing that reduces that effect. First it consumes data and labels it using standard recipes for field types. Then, it uses those labels as input features for machine learning algorithms. 
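The two-step idea can be illustrated with a minimal Python sketch. The label names and labeling recipes below are invented for illustration and do not reflect OSAS's actual taxonomy:

```python
import math
import re
from collections import Counter

# Step 1: label raw log fields using simple per-field-type "recipes".
# These recipes are illustrative only, not the ones OSAS ships with.
def label_field(name: str, value: str) -> str:
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", value):
        return f"{name}:ip"
    if value.isdigit():
        return f"{name}:numeric"
    # Random-looking, high-entropy tokens get their own label, collapsing
    # an unbounded attribute space into a handful of categories.
    probs = [n / len(value) for n in Counter(value).values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    return f"{name}:high_entropy" if entropy > 3.5 else f"{name}:text"

# Step 2: use the labels (not the raw values) as input features for an
# ML model, e.g. one-hot encoded for a supervised or unsupervised learner.
event = {"src": "10.0.0.7", "port": "443", "cmd": "xK9#qLm2!vR8z$wT"}
features = sorted(label_field(k, v) for k, v in event.items())
print(features)  # ['cmd:high_entropy', 'port:numeric', 'src:ip']
```

Feeding labels rather than raw values into the model collapses an unbounded attribute space (every possible IP, port, or token) into a small, dense feature space, which is the data-sparsity reduction the post describes.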

This automatic learning and tagging enables the tool to be used across a wide range of datasets and projects, the company explained.

According to Adobe Security, the project incorporates a lot of previous Adobe Security Intelligence Team research, white papers, and other open-source projects. 

The post SD Times Open-Source Project of the Week: OSAS appeared first on SD Times.

Mozilla’s Pyodide becomes an independent and community-driven project

Pyodide, Mozilla’s open-source project for running Python inside a web browser, has become an independent and community-driven project with a new home on GitHub. The company also announced the new 0.17 release as part of its announcement. 

The project aims to bring the Python runtime to the browser via WebAssembly along with NumPy, Pandas, Matplotlib, parts of SciPy and NetworkX. 

According to Mozilla, Pyodide contains the CPython 3.8 interpreter compiled to WebAssembly, which allows Python to run in the browser. It can also install any Python package with a pure Python wheel from the Python Package Index (PyPI).

RELATED CONTENT: Python named TIOBE’s programming language of 2020

The new version contains major maintenance improvements, a large redesign of the central APIs, and careful elimination of errors and memory leaks. 

The type translation module was significantly reworked so that the round trip translations of objects between Python and JavaScript produce identical objects.

Previously, issues with round-trip translations were caused by implicit conversion of Python types to JavaScript, which surprised users. 

Another new feature is a Python event loop that schedules coroutines to run on the browser event loop, which makes it possible to use asyncio in Pyodide. It’s also now possible to await JavaScript Promises in Python and await Python awaitables in JavaScript.

Error handling was also improved so that errors could be thrown in Python and caught in JavaScript and vice-versa. The error translation code is generated by C macros which simplifies implementing and debugging new logic. 

The latest release also completes the migration to the latest version of Emscripten, which uses the upstream LLVM backend, bringing significant reductions in package size and execution time. 

Pyodide was originally developed inside Mozilla to allow the use of Python in Iodide to build an interactive scientific computing environment for the web. 

Moving forward, the developers behind Pyodide are focusing on reducing download sizes and initialization times, improving the performance of Python code in Pyodide and simplifying the package loading system. 

The post Mozilla’s Pyodide becomes an independent and community-driven project appeared first on SD Times.
