SD Times news digest: Docker Desktop available for Apple Silicon, GitHub Actions with GitHub CLI, and new Harness integrations

Docker Desktop is now supported on all Apple Silicon devices.

Users can build and run images for both x86 and ARM architectures without having to set up a complex cross-compilation development environment.

Docker Hub also makes it easy to identify and share repositories that provide multi-platform images. 

Additional details are available here.

GitHub Actions with GitHub CLI 
GitHub Actions is now available in developers’ terminals with GitHub CLI, giving users insight into their workflow runs and files from the comfort of their local terminal through two new top-level commands, ‘gh run’ and ‘gh workflow.’

With the new ‘gh run list’ command, users get an overview of all types of workflow runs, whether they were triggered via a push, pull request, webhook, or manual event.

The new ‘gh run watch’ command helps users stay on top of in-progress workflow runs: they can either follow along as a run executes or combine it with other tools that send alerts when a run finishes.

Harness announces new integrations 
Harness announced new integrations with AWS GovCloud, Azure and GCP that provide DevOps and financial management teams with a CI/CD platform for multi-cloud deployments and enhanced cost visibility.

“As more enterprises look to do more with their software and reduce infrastructure costs, they’re turning to multi-cloud architectures to improve uptime, avoid vendor lock-in and gain price leverage,” said Jyoti Bansal, CEO and co-founder, Harness. “With these integrations, Harness is answering that call, providing an abstraction layer between cloud deployment complexity and developers, so every company can deliver next-generation software faster than ever.”

Harness gives customers direct access to all major public clouds and provides the same deployment and cost management experiences users have come to expect from managing applications hosted on Kubernetes, the company wrote in a post.

Windows Terminal Preview 1.8 
With the new preview version, the settings UI now ships inside the Windows Terminal Stable build. 

The settings UI also has a new font face dropdown and new command-line arguments, and the base layer has been removed.

Developers now have the ability to name their terminal windows, which makes it easier to identify windows when using wt CLI arguments. 

Additional details on the updated version are available here.

The post SD Times news digest: Docker Desktop available for Apple Silicon, GitHub Actions with GitHub CLI, and new Harness integrations appeared first on SD Times.



Observability: A process change, not a set of tools

If you do a Google search for the phrase “observability tools,” it’ll return about 3.3 million results. As observability is the hot thing right now, every vendor is trying to get aboard the observability train. But observability is not as simple as buying a tool; it’s more of a process change — a way of collecting data and using that data to provide better customer experiences. 

“Right now there’s a lot of buzz around observability, observability tools, but it’s not just the tool,” said Mehdi Daoudi, CEO of digital experience monitoring platform Catchpoint. “That’s the key message. It’s really about how can we combine all of these data streams to try to paint a picture.”

Observability: It’s all about the data
A guide to observability

If you go back to where observability came from — like many other processes, it originated at Google — its original definition was about measuring “how well internal states of a system can be inferred from knowledge of its external outputs,” said Daoudi. 

Daoudi shared an example of observability in action where one of Catchpoint’s customers was seeing a trend where customers complained a lot on Mondays and Tuesdays, but not on Sundays. The server load was the same, but the services were slower. Through observability, the company was able to determine that backup processes that only run on weekdays were the culprit and were impacting performance. 
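That kind of pattern falls out of the data quickly once telemetry is grouped along the right dimension. A minimal sketch, with purely illustrative numbers (not Catchpoint data), of grouping response times by day of week:

```python
from statistics import mean

# Illustrative samples only: (day_of_week, response_time_ms)
samples = [
    ("Mon", 900), ("Mon", 1100), ("Tue", 950), ("Tue", 1050),
    ("Sun", 300), ("Sun", 340),
]

def latency_by_day(samples):
    """Group response times by day to surface patterns like 'slow on weekdays'."""
    by_day = {}
    for day, ms in samples:
        by_day.setdefault(day, []).append(ms)
    return {day: mean(values) for day, values in by_day.items()}

print(latency_by_day(samples))  # Mondays and Tuesdays stand out against Sunday
```

Grouping by deployment, region, or any other tag works the same way; the point is that the question, not the tool, determines which dimension to slice on.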

“Observability is about triangulation,” said Daoudi. “It’s about being able to answer a very, very complex question, very, very quickly. There is a problem – where is the problem? The reason why this is important is because things have gotten a lot more complex. You’re not dealing with one server anymore, you’re dealing with hundreds of thousands of servers, cloud, CDNs, a lot of moving parts where each one of them can break. And so not having observability into the state of those systems, that makes your triangulation efforts a lot harder, and therefore longer, and therefore has an impact on the end users and your brand and revenue, etc.”

This is why Daoudi firmly believes that observability isn’t just a set of tools. He sees it as a way of working as a company, being aligned, and being able to have a common way to collect data that is needed to answer questions. 

The industry has standardized on OpenTelemetry as the common way of collecting telemetry data. OpenTelemetry is an open-source framework for gathering metrics, logs, and traces — often referred to as the three pillars of observability. 

The three pillars are often referenced in the industry when talking about observability, but Ben Sigelman, CEO and co-founder of monitoring company Lightstep, believes that observability needs to go beyond metrics, logs, and traces. He compared the three pillars to Steve Jobs announcing the first iPhone back in 2007. Jobs started off the presentation by announcing a widescreen iPod with touch controls, a “revolutionary” mobile phone, and a breakthrough internet communications device, making it seem as though they were three separate devices. 

“These are not three separate devices,” Jobs went on to clarify. “This is one device, and we are calling it iPhone.”  Sigelman said the same is true of telemetry. Metrics, logs, and traces shouldn’t be known as the three pillars because you get all three at once and it’s one thing: telemetry.
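One way to picture “one thing: telemetry” is a single record per operation that carries all three signals together. This is a hedged sketch with hypothetical names, not any vendor’s API:

```python
import time
import uuid
from dataclasses import dataclass, field

@dataclass
class Telemetry:
    """One unit of telemetry: a trace span carrying its own metrics and logs."""
    trace_id: str
    name: str
    metrics: dict = field(default_factory=dict)
    logs: list = field(default_factory=list)

def instrument(name):
    """Record one operation, emitting the trace, metrics, and logs at once."""
    span = Telemetry(trace_id=uuid.uuid4().hex, name=name)
    start = time.monotonic()
    span.logs.append(f"{name} started")
    # ... the actual work being observed would happen here ...
    span.metrics["duration_ms"] = (time.monotonic() - start) * 1000
    span.logs.append(f"{name} finished")
    return span

span = instrument("checkout")
```

The three signals never exist separately here; splitting them back into “pillars” is a storage and querying decision, not a property of the data.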

Michael Fisher, group product manager at AIOps company OpsRamp, broke observability data down further into two signals: symptomatic signals and causal signals. Symptomatic signals are what an end user is experiencing, such as page latency or a 500 Internal Server Error on a website. Causal signals are what cause those symptomatic signals. Examples include CPU, network, and storage metrics, and “things that may be an issue, but you’re not sure because they’re not being tied to any symptom that an end user might be facing.” 

Monitoring tools tend to focus mostly on the causal signals, Fisher explained, but he recommends starting with symptomatic signals and working towards causal signals, with the end state being a unit of the two. 
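Fisher’s symptomatic-first ordering can be sketched in a few lines; the signal names and thresholds below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical symptomatic signal: what the end user actually experiences.
SYMPTOM = {"signal": "page_latency_ms", "value": 9800, "threshold": 2000}

# Hypothetical causal signals: what might explain the symptom.
CAUSAL_METRICS = {
    "cpu_percent": {"value": 97, "threshold": 80},
    "network_errors": {"value": 2, "threshold": 100},
    "disk_io_wait_ms": {"value": 450, "threshold": 100},
}

def likely_causes(symptom, causal_metrics):
    """Start from the symptom: only chase causal signals while users are affected."""
    if symptom["value"] <= symptom["threshold"]:
        return []  # no user-facing symptom, nothing to tie back
    return sorted(
        name for name, m in causal_metrics.items() if m["value"] > m["threshold"]
    )

print(likely_causes(SYMPTOM, CAUSAL_METRICS))  # → ['cpu_percent', 'disk_io_wait_ms']
```

Note that a breached causal threshold with no symptom (or vice versa) is exactly the gap Fisher describes: the two signal types only become actionable once tied together.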

“When something is going wrong [the developer] can search that log, they can search that trace and they can tie it back to the piece of code that’s having an issue,” said Fisher. “The operations team, they may just see the causal symptoms, or maybe there is no causal symptom. Maybe the application is running fine but users are still complaining. Tying those two together is kind of a key part of this shift towards observability. And that’s why I talk about observability as a development principle because I think starting with the symptomatic signals with the people who actually know is a huge paradigm shift for me because I think some of the people you talk to or ITOps teams you talk to is that monitoring is their wheelhouse, whereas many modern shops, OpsRamp included, much more monitoring actually happens on the development team side now.”

Providing good end user experience is the ultimate goal of observability. With monitoring, you might only be focusing on those causal signals, which might mean you miss out on important symptomatic signals where the end user is experiencing some sort of service degradation or trouble accessing your application. 

“When I talk about using observability to drive end-user outcomes, I’m really talking about focusing on observing the things that would impact end users and taking action on them before they do because traditionally this focus on monitoring has been at a much lower level, layer 3, I care about my network, I care about my switches,” said Fisher. “I’ve talked to customers where that’s all they care about, which is fine but you start to realize those things really matter less once you move up the stack and you have a webpage or you have a SaaS application. The end user will never tell you that their CPU is high, but they will tell you that your webpage is taking 10 seconds to load and they couldn’t use your tool. If an end user can’t use your tool who gives a damn about anything else?”

It’s important that observability not just stay in the hands of developers. In fact, Bernd Greifeneder, CTO of monitoring company Dynatrace, believes that if developers just do observability on their own, then it’s nothing more than a debugging tool. “The reason then for DevOps and SREs needs to come into play is to help with a more consistent approach because these days multiple teams create different microservices that are interconnected and have to interplay. This is sort of a complexity challenge and also a scale challenge that needs to be solved. This is where an SRE and Ops team have to help with standing up proper observability tooling or monitoring if you will, but making sure that all the observability data comes together in a holistic view,” he said. 

SRE and Ops teams can help make sure that the observability data that the developers are collecting has the proper analytics on top of it. This will enable them to gain insights from observability data and use those insights to drive automation and further investments into observability. “IT automation means higher availability, it means automatic remediation when services fail, and ultimately means better experiences for customers,” Greifeneder said. 

When looking into the tools to put on top of your observability data to do those analytics, Tyler McMullen, CTO of edge cloud platform Fastly, recommends constantly experimenting to see what works for your team. He explained that these observability vendors often charge a lot of money, and teams might fall into the trap of buying a solution, putting too much observability data into it, and being shocked by the resulting bill. 

“Are the pieces of information that we’re plugging into our observability, are they actually working for us? If they’re not working for us, we definitely shouldn’t have them in there,” said McMullen. “On the other hand, you only really find out whether or not something is useful after it becomes useful. Figuring out what you need in advance is I think, one of the biggest problems with this thing. You don’t want to put too much in. On the other hand, if you put too little in you don’t know whether or not it is useful.” As a result, your team will need to do lots of experimenting to discover the right process and the right balance. 

Daoudi added that it’s also important to answer the question of why you’re doing observability before looking into products. “Like every new thing that when a company goes and decides to implement something, you start with why? Why do you need to implement observability? Why do you need to implement SREs? Why do you need to implement an HR system? If you don’t define the ‘why’ then what typically happens is first it’s a huge distraction to your company and also a lot of resources being wasted and then the end result might not be what you’re looking for,” he said.  

And of course, it’s important to remember that observability is more of a process, so looking for a tool that will do observability for you won’t work. The tooling is really about analytics on the observability data you’ve gathered. 

“I really don’t think observability is a tool,” said Daoudi. “If there was such a thing as go to Best Buy, aisle 5, or Target, or Walmart and buy an observability tool for like $5 million, it ain’t going to work because if your company is not functioning and aligned, and your processes and everything isn’t aligned around what observability is supposed to do, then you’re just going to have shelfware in your company.”

The post Observability: A process change, not a set of tools appeared first on SD Times.



Mobile security lessons learned from mobile game cheats

Mobile games are often broken into so users can access premium content and paid features and obtain in-game currency. This is done by tampering with memory, bypassing payments and Touch ID screens, and downloading paid apps for free — and can be done on both jailbroken and non-jailbroken devices. 

In a recent webinar on SD Times, Jan Seredynski, mobile security researcher and pentester at the mobile application protection company Guardsquare, walked attendees through these game cheats and provided four simple tips on how to prevent them. According to Seredynski, these lessons learned from mobile game cheats can be applied to mobile application security in any domain, including healthcare, e-commerce, and banking.

Seredynski’s four simple tips are: 


Environment integrity: Detecting a compromised environment, such as a jailbroken/rooted device, an emulated app or system, or the presence of a debugger.
Application integrity: Verifying that the user is running the current version of the application, that application resources haven’t been changed, and that the application was installed from a legitimate source.
Code integrity: Verifying that the executing code is identical to the developer’s code. For instance, ensuring that machine code or Java instructions haven’t been changed. 
Obfuscation: Making it harder for attackers to understand your code by renaming variables, methods, or classes; encrypting sensitive strings; complicating control flow; and encrypting assets. 




Each of these protection components matter, Seredynski explained. If you don’t have environment integrity, a hacker can bypass the application integrity; if you don’t have application integrity, a hacker can imitate the compromised environment; without code integrity, a hacker can overwrite protection code; and without obfuscation, a hacker can easily find relevant functions.  
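As a rough illustration of the first and third components, here is a minimal sketch. The artifact paths are common examples only, and production-grade checks (like those Guardsquare describes) go much deeper:

```python
import hashlib
import os

# Common jailbreak/root artifacts; real environment checks cover many more signals.
SUSPICIOUS_PATHS = [
    "/Applications/Cydia.app",  # typical iOS jailbreak artifact
    "/system/bin/su",           # typical Android root artifact
    "/system/xbin/su",
]

def environment_compromised(paths=SUSPICIOUS_PATHS):
    """Environment integrity: flag the device if any known artifact is present."""
    return any(os.path.exists(p) for p in paths)

def code_integrity_ok(code_bytes, expected_sha256):
    """Code integrity: compare a hash of the running code against a known-good value."""
    return hashlib.sha256(code_bytes).hexdigest() == expected_sha256
```

In practice such checks run on-device, are themselves obfuscated, and are refreshed regularly, precisely because each protection covers for the others.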

Some other tips Seredynski suggested are to regularly update your protection code so that hackers don’t have enough time to understand those protections, make sure protections work on all operating systems and versions, and check for false positives. 

To learn more and see Seredynski’s do-it-yourself steps for protecting your application through each of the four components, watch the full webinar here.

The post Mobile security lessons learned from mobile game cheats appeared first on SD Times.



SD Times news digest: RubyMine 2021.1 released, ShiftLeft CORE, and GrammaTech CodeSonar update

JetBrains announced the latest release of its Ruby on Rails IDE. RubyMine 2021.1 now supports RBS and uses .rbs files in its code insight for improved code completion. 

Users can also now connect RubyMine to their organization in Space, where they can view and clone project repositories, review teammates’ code, and write Space Automation scripts. 

Additional details on all of the UI and UX improvements and new features for working with web technologies and frameworks, version control systems and JSON are available here.

ShiftLeft CORE announced
ShiftLeft CORE is a new unified code security platform powered by ShiftLeft’s Code Property Graph (CPG). 

The platform contains NextGen Static Analysis (NG SAST), a modern code analysis solution for developers to find and fix vulnerabilities in their IDE, as well as ShiftLeft’s Intelligent SCA and ShiftLeft Educate, which offers context-sensitive security training for developers. 

“Organizations today don’t have a problem finding vulnerabilities; the challenge is prioritizing and fixing the ones they already have without sacrificing speed in the development process,” said Chetan Conikee, CTO, ShiftLeft. “The groundbreaking features we’re offering in the ShiftLeft CORE platform are designed to address this new dynamic, and turn application security into a business advantage for our customers.”

GrammaTech CodeSonar update
GrammaTech CodeSonar 6.0 provides a deeper integration of SAST within DevOps pipelines.

The new version features an integrated visual representation of selected code that improves defect remediation and eliminates the need for a separate developer interface, along with built-in detection, alerting, and reporting of OWASP Top 10 risks.

It also includes GitLab integration as well as additional language and compiler support requested by more than 500 GrammaTech customers to support their transition to DevSecOps practices. 

Postman announces unlimited API collaboration for up to three team members
Teams of up to three members can now have unlimited shared workspaces and unlimited shared requests at no cost. 

“Postman is committed to helping everyone work with APIs more easily, and this new enhancement is a key part of that effort, eliminating a cost barrier and enabling small teams to take full advantage of the platform’s API collaboration capabilities,” Postman wrote in a post.

Additional collaboration details for each Postman plan are available here.

The post SD Times news digest: RubyMine 2021.1 released, ShiftLeft CORE, and GrammaTech CodeSonar update appeared first on SD Times.



Checkov 2.0 now available with new Dockerfile scanner

Bridgecrew has announced the release of Checkov 2.0. Checkov is a static code analysis tool specifically designed for Infrastructure as Code (IaC). 

“Policies that take into account interdependencies within IaC are critical to understanding the impact of misconfigurations,” said Rob Eden, senior engineer and Checkov contributor. “It’s not enough to know that a security group has ports open to the world; we need to know if that misconfiguration is in production or just a test environment in order to prioritize it appropriately. It’s awesome to have an open-source tool providing that level of context.”

Key additions in Checkov 2.0 include 250 new policies, Dockerfile scanning to secure container build tasks, and graph-based mapping. 

RELATED CONTENT: 5 ways static code analysis can save you

Checkov first launched in 2019, and since then has helped developers identify misconfigurations in their IaC frameworks like Terraform, CloudFormation, Kubernetes, Azure Resource Manager (ARM), and Serverless Framework. 

“This release is the most significant update to Checkov since it launched early last year,” said Matt Johnson, developer advocate at Bridgecrew. “Dependency awareness means developers have even more context earlier in the development lifecycle, helping companies around the world better secure their cloud infrastructure.”

The post Checkov 2.0 now available with new Dockerfile scanner appeared first on SD Times.



SD Times Open-Source Project of the Week: C# standardization

Microsoft has announced that C# standardization has been open sourced. The work on C# standards will now happen in the open under the .NET Foundation, while the ECMA C# standards committee, TC-49-TG2, remains responsible for creating the proposed standard for the C# language.

“Moving the standards work into the open, under the .NET Foundation, makes it easier for standardization work,” Bill Wagner, a principal content developer at Microsoft wrote in a blog post. “Everything from language innovation and feature design through implementation and on to standardization now takes place in the open. It will be easier to ask questions among the language design team, the compiler implementers, and the standards committee. Even better, those conversations will be public.”

Before the recent open sourcing of the standardization work, the C# compilers had already been open source since 2014; they now live in the dotnet/roslyn repository. The dotnet/csharplang repository later split off to provide a dedicated space for innovation in the language. The newly announced dotnet/csharpstandard completes the group as a third repository related to the C# language. 

The community can now see work in progress on the standard text for C# 6 and C# 7. Microsoft said that issues in dotnet/csharplang and dotnet/docs for the spec text will soon move to the new standards repository. 

In addition, the C# spec will be replaced with the version from the standards committee, and the C# 6 draft spec will be removed from the dotnet/csharplang repo.

The post SD Times Open-Source Project of the Week: C# standardization appeared first on SD Times.



SD Times news digest: .NET 6 Preview 3, WhiteSource announces new funding for app security, and Canonical adds full enterprise support for Kubernetes 1.21

.NET 6 Preview 3 is now available. The platform matrix of .NET 6 was significantly expanded with the new preview, adding Android, iOS, Mac and Mac Catalyst (for x64 and Apple Silicon), and Windows Arm64. 

The CollectionsMarshal.GetValueRef API was added to make updating struct values in Dictionaries faster and is intended for high-performance scenarios. 

Preview 3 also contains changes that improve code generation in RyuJIT, and interface casting performance has been boosted by 16%–38%.

Additional details on the new release are available here.

WhiteSource announces $75 million in new funding for app security
The open-source security and management company WhiteSource announced that it raised $75 million in Series D funding led by Pitango Growth, bringing the total funding to $121.2 million. 

“Application security needs have gone beyond just detection to include continuous prioritization and prevention, as demonstrated by recent software supply chain attacks,” said Rami Sass, Co-Founder and CEO of WhiteSource. “This investment brings us closer to creating a future where the cycle of application delivery is always a step ahead of any security risk, and where developers are easily equipped with code they can trust.”

WhiteSource provides its remediation-centric solution to more than 800 companies globally and helps organizations better protect their software applications without affecting the speed of software delivery or performance. 

Canonical announces full enterprise support for Kubernetes 1.21
Canonical announced full enterprise support for Kubernetes 1.21 from the cloud to edge. 

“Canonical Kubernetes is about removing complexity around Kubernetes operations from cloud to edge. We bring certified Kubernetes distributions to allow users to bootstrap their Kubernetes journey, as well as a large tooling ecosystem and automation framework combination, for businesses to reap the K8s benefits and focus on innovation in the growing cloud-native landscape. Our users benefit from the latest features of Kubernetes, as soon as they become available upstream,” said Alex Chalkias, product manager for Kubernetes at Canonical.

Canonical also said that it commits to supporting N-2 releases as well as providing extended security maintenance (ESM) and patching for N-4 releases in the stable release channel.
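Interpreting that policy against the 1.21 release, a small sketch (version strings simplified; the exact channel mapping is Canonical's and is not reproduced here) of which releases fall under full support versus ESM:

```python
def support_windows(current_minor, full_back=2, esm_back=4):
    """Given Kubernetes 1.N, list fully supported (N through N-2) and
    ESM-only (N-3 through N-4) releases."""
    full = [f"1.{current_minor - i}" for i in range(full_back + 1)]
    esm = [f"1.{current_minor - i}" for i in range(full_back + 1, esm_back + 1)]
    return full, esm

full, esm = support_windows(21)
print(full)  # ['1.21', '1.20', '1.19']
print(esm)   # ['1.18', '1.17']
```

Each new upstream minor release shifts both windows forward by one, dropping the oldest ESM release out of maintenance.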

Additional details are available here.

Gitpod announces new funding
Gitpod announced that it raised $13 million in new funding, as well as new product features and the first DevX Conf, which is focused on improving developer experiences. 

“With VS Code in Gitpod developers get the most popular editing experience combined with all the benefits of a fully automated, cloud-based solution,” Gitpod wrote in a post.

Also, with Docker support and sudo privileges, developers can run containers within their workspace. 

The post SD Times news digest: .NET 6 Preview 3, WhiteSource announces new funding for app security, and Canonical adds full enterprise support for Kubernetes 1.21 appeared first on SD Times.



NativeScript 8.0 launches with new Best Practices Guide

The latest version of the NativeScript framework is now available. NativeScript 8.0 features more streamlining of the core of the framework so that it can serve as a good foundation for future enhancements, as well as the release of a new Best Practices Guide.

According to the team, the previous release last Fall, NativeScript 7.0, was one of the largest structural updates and set the foundation for this release and other future releases. 

“We want to thank the incredible community for their input and support. The contributions provided during this cycle were astounding and you make working on NativeScript an absolute joy. In addition, the tremendous love you have shown on Open Collective, each and every contribution has helped make 8.0 a reality as well as paved the way for more exciting things to come,” the NativeScript Technical Steering Committee wrote in a post.

Key new features in NativeScript 8.0 include: 

Apple M1 support
Accessibility support
CSS box-shadow support
CSS text-shadow support
A hidden binding property
An official eslint package
Support for creative view development using the new RootLayout container

In addition to the first official NativeScript Best Practices Guide, the team also gave the NativeScript website and documentation a refresh.

“Over the years several distinct best practices have emerged when working with NativeScript and we took a moment to outline a few of the most fundamental ones you should be aware of to get the best end result out of your projects,” the Technical Steering Committee wrote.

The framework has two major versions released every year, to align with platform tooling updates. The next major release is expected in the Fall. 

More information can be found in the release notes, including details on how to upgrade. 

The post NativeScript 8.0 launches with new Best Practices Guide appeared first on SD Times.



Developers reflect on challenges, feelings about remote work in pandemic year

Many companies have just surpassed the one-year anniversary of sending their employees home to work remotely as a safety measure for COVID-19. At the time, many thought this might be a temporary situation and folks would return to the office after a month or so, but one year later, many workers haven’t returned to the office. 

At the start, some developers struggled with remote work, while others thrived. Initial struggles included setting up and getting used to a distributed environment for the first time, feeling isolated from co-workers, and balancing work and home life — especially for those with young children when normal childcare options weren’t there or they had to help their kids with remote schooling alongside working their normal job. 

Benefits included the ones normally associated with working from home: increased productivity, more free time due to not having a commute, and the convenience of not having to go anywhere. 

RELATED CONTENT: How you organize your development teams matters

One year later, the benefits might have remained the same, but the negatives have compounded themselves for some. Those feeling isolated from coworkers at the start of the pandemic are now dealing with the mental toll of having been isolated not only from coworkers for a full year, but also from family and friends. 

“A couple days in a month or a week, no problem, but forever? Well, that just requires a lot more intention from yourself, your team, and your coworkers,” said Anthony Tran, software engineer at Rollbar, a company that provides a continuous improvement platform. 

In fact, a survey released by Harness in August—5 months into remote working—revealed that 12% of developers were less happy in their roles than they were pre-pandemic. 

There are some who either didn’t like or struggled with working from home at the start, but have changed opinions over time as they’ve gotten more used to it and experimented and figured out things that worked for them. 

“During the beginning of the pandemic it was a struggle to stay motivated at home, there were so many distractions that it made it difficult to work,” said Tyler Corwin, a developer at digital marketing company Figmints. “I was still able to hit all of my deadlines, but I didn’t get the same drive to get things done as I did while I was still in the office. After the first month things got much better as my time management and organization got better.” For example, one thing he started doing was creating “fallback” tasks that he could work on while he waited on answers from his teammates on Slack or email. “This kept me working more efficiently and it’s something that I’ll continue to do even after we resume work back at the office,” Corwin said. 

Corwin added that while at the start he struggled with motivation, communication with team members, and keeping his kids from running into his workspace, now that the vaccine is here, he finds himself not wanting to return to the office five days a week. 

Maxime Basque, a developer at Unito, said that working remotely has been more good than bad. “While I do miss the camaraderie and things like being able to just ask something to someone directly without going the async route, as a generally anxious person I feel a lot calmer these days; not wasting 1h+ in transport every day, being able to concentrate with no distractions when I need to, having almost full control over my schedule, not having to think about lunch, etc. Eliminating the small things that caused a lot of stress were really beneficial for me,” he said.

Daniel Valdivia, an engineer at Kubernetes-native object storage company MinIO, appreciated the extra time he was able to spend with his family. “As the father of a 2-year-old, it has been awesome to get as much time as I have had with my child at such a young age.” 

Sachin Goyal, a principal engineer at Rollbar, also has had mostly positive experiences with working remotely. “I was able to use my time much more efficiently. Cutting down commute, lunch, and room-hopping is a huge time saver. Apart from that, I spent much more time with my 2-year-old and my wife,” he said. The one complaint he has, like many, is not being able to see colleagues regularly. 

Goyal feels that his team and manager have been very accommodating throughout this time. For example, since his daughter’s daycare is closed, he and his wife plan their day and meetings around making sure one of them is always with their daughter, and his company allowed him to have a more flexible schedule. “The ability to work at flexible hours is a huge benefit for me. Open communication was really helpful. Clearly stating the accommodations I wanted from my team and my manager and working with them to create a win-win was actually a ‘win’ for all us,” Goyal said.

Tran also noted that his managers have put in a lot of effort in trying to make remote work a positive experience, such as having lunch meetings on working efficiently and ergonomically, Zoom hangouts with trivia, group yoga, or playing whatever the latest popular Internet game was. “Also, I’d like to emphasize being candid with my managers and coworkers at Rollbar and sharing that I was losing motivation and focus, and feeling distant from the company and team was very helpful because they related that this was a common symptom of working remote and being able to share that, we were able to put more events/meetings/activities in place to help mitigate this feeling,” Tran said. 

Rico Pamplin, a lead process engineer at Lincoln Financial Group, also sees positive steps being taken by management to ensure employees are doing okay. “My manager also heavily promotes maintaining a healthy work/life balance and we have scheduled 1:1 sessions to ensure our professional requirements aren’t overstepping the personal ones.” He said that one way he ensures he’s maintaining his work/life balance is scheduling activities that require him to leave his workspace, because otherwise he’s found himself with days where he’s gotten super focused on a project and then suddenly realized it was 10 pm. 

As more people get vaccinated, many companies are in the process of discussing what that means for future plans, whether that means fully reopening offices, staying fully remote, or adopting a hybrid model. 

Valdivia said that for most of his career he’s been in a physical office and preferred it—because he doesn’t feel that the collaborative process of problem solving on a whiteboard translates to Zoom meetings, and in-person conversations can help build relationships that advance your career—but now has begun to rethink his views and see the value in a hybrid model. “I think it can recharge you, allow for deep work and add a few hours a week of family time without negatively impacting your productivity or the culture.” 

Basque said his company, Unito, will be adopting a hybrid model once the pandemic ends, where employees will be able to work from home two to three days per week. “The company believes this will allow us to maintain our strong culture, foster collaboration, but also adapt to the new reality and new needs of the team.”

Pamplin also sees the value in a hybrid model. “Now that I’ve been remote for a while, the luster has worn off a bit, but I definitely wouldn’t want to go back to primarily working in an office. I don’t mind the cubicle setting occasionally, but to do my job effectively it’s not a necessity, especially given that most of what I do is virtual, and my team is geographically distributed.”

The post Developers reflect on challenges, feelings about remote work in pandemic year appeared first on SD Times.

Read more:


4 reasons the future of cloud-native software is open source

Over the last several years, cloud-native development has transformed the way we think about software development. To speed up release cycles, build more powerful applications, and deliver superior user experiences at scale, more and more dev teams are embracing this modern approach to software development and building applications entirely in the cloud. 

According to the Cloud Native Computing Foundation (CNCF), there are at least 6.5 million cloud-native developers on the planet today, up from 4.7 million in Q2 2019. That number is all but certain to keep growing.

With the increase in cloud-native developers, it comes as no surprise that more and more organizations are embracing cloud-native applications. One December 2020 study, for example, found that 86% of organizations were using cloud-native apps.


Similarly, though enterprises have long been wary of investing in open source applications, that’s all changing, too. According to Red Hat’s 2020 State of Enterprise Open Source Report, enterprises are increasingly investing in open source solutions. In fact, 95% of survey respondents say that open source is “strategically important” to their overall software strategy.

Add it all up, and the writing’s on the wall: If the future of software is cloud-native, it follows that the future of cloud-native is open source. Here are four reasons why. 

1. Community
The open-source community is vibrant, filled with developers from all walks of life who live all around the world. When you invest in the right open-source tools, not only do you gain access to the software itself, you can also leverage a diverse, global community of committed developers who are eager to help you through problems and troubleshoot issues. At the same time, it’s not uncommon for community members to add new features, build new integrations, or conduct security audits looking for vulnerabilities.

On the flip side, open-source and open-core companies that shepherd open-source projects experience the same kinds of benefits. Not only can the community help them build a better, more secure, more feature-rich product, it can also help promote it to folks around the world. 

2. Freedom from vendor lock-in
According to the Flexera 2020 CIO Priorities Report, more than two-thirds of CIOs are concerned about getting locked into cloud providers. 

This is another main driver of open-source adoption. Since open-source solutions ship with open standards and full access to source code, enterprises are able to take control of their own destiny instead of crossing their proverbial fingers and hoping that the vendor’s roadmap aligns with the interests of their business over the long term.

Simply put, open source enables organizations to avoid getting locked into any one vendor — and, by extension, getting coerced into paying hefty licensing fees for the foreseeable future.

3. Customizability
In addition to helping you avoid vendor lock-in, open-source solutions are highly customizable. It’s not uncommon for leading open-source solutions to have hundreds of integrations, built by both the open-source community and the open core company behind the project.

This is a huge deal. No two organizations are the same. Yet when an enterprise invests in a proprietary solution, it isn’t given access to source code and can’t reconfigure the software to meet its unique needs. Of course, some software vendors offer native integrations out of the box. But unless your team uses the tools the vendor supports and nothing else, chances are there will be at least one or two integrations on your wishlist.

When you go the open-source way, you control your own future. Your dev team can build whatever integrations they’d like. They can also fork the entire project and take it in an entirely new direction — one that makes it much easier to meet their objectives.

4. Security and control
In the age of high-profile data breach after high-profile data breach, security is more important than ever before. When you think about regulations and consumer protection laws like GDPR and CCPA — and the resulting potential penalties for non-compliance — the importance of security compounds even further.

There used to be a common misconception that proprietary software was inherently more secure than open source solutions because its source code was hidden from the public and, as such, was harder for bad actors to exploit.

But that misconception has evaporated in recent years. The fact of the matter is that — when you invest in proprietary tools — you’re essentially outsourcing your security stance to the vendor, trusting that their software is secure.

By providing full access to source code and the ability to configure and extend the software however you like, open source enables organizations to take complete control over their security needs. In today’s day and age, this benefit can’t be overstated.

Is your enterprise ready for the future?
Cloud-native solutions are the future of software because they enable organizations to unlock the true promise of the cloud. But in order to truly do that, software needs to be open source. Not only does open-source software give organizations access to powerful communities of contributors, it also lets them build the perfect tool for the job while retaining complete control over their security requirements.

To learn more about the transformative nature of cloud-native applications and open source software, check out KubeCon / CloudNativeCon Europe 2021, a virtual event hosted by the Cloud Native Computing Foundation, which takes place May 4–May 7. For more information or to register for the event, go here.

The post 4 reasons the future of cloud-native software is open source appeared first on SD Times.
