Categories
Software

Cloud Foundry releases Paketo Buildpacks to provide application language runtime support

The Cloud Foundry Foundation announced the launch of Paketo Buildpacks to make it easier for cloud-native developers to build code into a container by automatically detecting which language, frameworks, and runtimes are needed. 

Developers can then use the buildpacks to compile code and configure the container for development, according to the foundation.

“Paketo Buildpacks promise ‘less time building, more time developing’ for developers and operators working within any platform that supports the CNB spec, and have been created specifically to build containers,” the foundation wrote in an announcement.

RELATED CONTENT: 3 steps to becoming cloud native

Paketo Buildpacks — unlike past generations of buildpacks — are modular, written in Go, and provide language runtime support for applications. They leverage the Cloud Native Buildpacks framework to simplify image builds.

The solution also ensures that upstream languages, runtimes, and frameworks are continuously patched in response to vulnerabilities and updates.

Paketo Buildpacks were created in collaboration with VMware for developers and operators that build and deploy applications to cloud-native platforms and build systems that support the Cloud Native Computing Foundation’s (CNCF) Cloud Native Buildpack (CNB) specification such as Cloud Foundry, Kubernetes, Tekton, and other technologies.
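For developers on a machine with the pack CLI installed, the CNB workflow described above typically comes down to a single command. The sketch below is illustrative only; the builder image name, app name, and port are assumptions that should be checked against Paketo's current documentation:

```shell
# Build an OCI image from local source; the builder detects the
# language and applies the matching Paketo buildpacks automatically.
pack build my-app --builder paketobuildpacks/builder:base --path .

# Run the resulting image locally (the port is app-specific).
docker run --rm -p 8080:8080 my-app
```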

“As the use of buildpacks continues to grow throughout the cloud-native developer community, the Paketo Buildpacks project fills a critical gap in the ecosystem. The project brings together a well-curated collection of buildpacks upon which enterprise developers can rely,” said Chip Childers, executive director of the Cloud Foundry Foundation.

Current language packages exist for Node.js, Go, PHP, Java, and .NET Core. 

Additional details are available here.

The post Cloud Foundry releases Paketo Buildpacks to provide application language runtime support appeared first on SD Times.

Read more: sdtimes.com

Categories
News

SpaceX aborts launch attempt of sixth batch of Starlink satellites due to engine power issue

SpaceX was attempting to launch its sixth batch of Starlink internet broadband satellites, but the launch was aborted when the countdown timer reached zero. On the live feed, SpaceX engineers were heard citing a “launch abort on high engine power,” and the webcast announcer confirmed the abort was related to Merlin engine power. SpaceX later provided additional detail, including that the sequence was automatically aborted by its system.

The announcer noted that the “vehicle appears to be in good health,” which SpaceX later confirmed, and which should bode well for resetting for another attempt. SpaceX has a backup opportunity on Monday, but the next launch attempt is still to be determined, likely as SpaceX investigates what exactly was behind the engine power issue and when it makes sense to try again, given conditions on the launch range.

Standing down today; standard auto-abort triggered due to out of family data during engine power check. Will announce next launch date opportunity once confirmed on the Range

— SpaceX (@SpaceX) March 15, 2020

This would’ve been a record fifth flight for the Falcon 9 booster used in this launch, as well as the first reuse of the fairing that protects the cargo. SpaceX has advised that it’ll reveal when it’ll make its next launch attempt once it can confirm those details, and we’ll provide that info once available.


Read more: feedproxy.google.com

Categories
Software

White House announces call to action for the tech community to contribute to COVID-19 dataset

The White House is issuing a call to action for AI experts to develop new text and data mining techniques to analyze the newly released COVID-19 Open Research Dataset (CORD-19).

The dataset, the most extensive machine-readable coronavirus literature collection available, was created with input from researchers and leaders from the Allen Institute for AI, Chan Zuckerberg Initiative (CZI), Georgetown University’s Center for Security and Emerging Technology (CSET), Microsoft, and the National Library of Medicine (NLM) at the National Institutes of Health.

To create the dataset, Microsoft’s web-scale literature curation tools were used to identify and bring together worldwide scientific efforts and results; CZI provided access to pre-publication content; NLM provided access to literature content; and the Allen AI team transformed the content into machine-readable form, making the corpus ready for analysis and study, according to a post from the White House.

RELATED CONTENT: Developers take on COVID-19 with open-source projects

Now, researchers are encouraged to submit text and data mining tools and insights via the Kaggle platform, a machine learning and data science community owned by Google Cloud.

“One of the most immediate and impactful applications of AI is in the ability to help scientists, academics, and technologists find the right information in a sea of scientific papers to move research faster. We applaud the OSTP, WHO, NIH and all organizations that are taking a proactive approach to use the most advanced technology in the fight against COVID-19,” said Dr. Oren Etzioni, chief executive officer of the Allen Institute for AI.

Sought after insights include the natural history, transmission, and diagnostics for the virus, management measures at the human-animal interface, lessons from previous epidemiological studies, and more.

“It’s all hands on deck as we face the COVID-19 pandemic,” said Eric Horvitz, chief scientific officer at Microsoft. “We need to come together as companies, governments, and scientists and work to bring our best technologies to bear across biomedicine, epidemiology, AI, and other sciences. The COVID-19 literature resource and challenge will stimulate efforts that can accelerate the path to solutions on COVID-19.”

The CORD-19 resource is available on the Allen Institute’s SemanticScholar.org website and will continue to be updated as new research is published in archival services and peer-reviewed publications.

The creators of the dataset recommend using metadata from the comprehensive file when available, instead of parsed metadata in the dataset. Please note the dataset may contain multiple entries for individual PMC IDs in cases when supplementary materials are available.
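As a sketch of that recommendation, deduplicating rows that share a PMC ID with pandas might look like the following. The column names and the tiny in-memory table are illustrative assumptions; in practice you would load the dataset's actual metadata file (e.g. with pd.read_csv) and verify its schema against the release you downloaded:

```python
import pandas as pd

# Illustrative stand-in for the dataset's metadata file.
metadata = pd.DataFrame({
    "pmcid": ["PMC111", "PMC111", "PMC222", None],
    "title": ["Paper A", "Paper A (supplement)", "Paper B", "Preprint C"],
})

# Keep the first entry per PMC ID (duplicates can appear when
# supplementary materials are available); rows without a PMC ID
# are kept as-is.
has_id = metadata["pmcid"].notna()
deduped = pd.concat([
    metadata[has_id].drop_duplicates(subset="pmcid", keep="first"),
    metadata[~has_id],
])

print(len(deduped))
```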


Read more: sdtimes.com

Categories
News

Glisten uses computer vision to break down product photos to their most important parts

It’s amazing that in this day and age, the best way to search for new clothes is to click a few check boxes and then scroll through endless pictures. Why can’t you search for “green patterned scoop neck dress” and see one? Glisten is a new startup enabling just that by using computer vision to understand and list the most important aspects of the products in any photo.

Now, you may think this already exists. In a way, it does — but not in a way that’s helpful. Co-founder Sarah Wooders encountered this while working on a fashion search project of her own at MIT.

“I was procrastinating by shopping online, and I searched for v-neck crop shirt, and only like two things came up. But when I scrolled through there were 20 or so,” she said. “I realized things were tagged in very inconsistent ways — and if the data is that gross when consumers see it, it’s probably even worse in the backend.”

As it turns out, computer vision systems have been trained to identify, really quite effectively, features of all kinds of images, from identifying dog breeds to recognizing facial expressions. When it comes to fashion and other relatively complex products, they do the same sort of thing: Look at the image and generate a list of features with corresponding confidence levels.

So for a given image, it would produce a sort of tag list, like this:
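A hypothetical output of that kind (the tags and confidence values below are invented for illustration, not taken from any real model) is essentially a flat list:

```python
# Flat tag list from a generic vision model: every label is just a
# string with a confidence score, with no notion of what kind of
# attribute each one is.
tags = [
    ("shirt", 0.97),
    ("maroon", 0.94),
    ("sleeve", 0.91),
    ("long sleeve", 0.88),
    ("crew neck", 0.83),
    ("casual", 0.79),
]

for label, confidence in tags:
    print(f"{label}: {confidence:.2f}")
```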

As you can imagine, that’s actually pretty useful. But it also leaves a lot to be desired. The system doesn’t really understand what “maroon” and “sleeve” really mean, except that they’re present in this image. If you asked the system what color the shirt is, it would be stumped unless you manually sorted through the list and said, these two things are colors, these are styles, these are variations of styles, and so on.

That’s not hard to do for one image, but a clothing retailer might have thousands of products, each with a dozen pictures, and new ones coming in weekly. Do you want to be the intern assigned to copying and pasting tags into sorted fields? No, and neither does anyone else. That’s the problem Glisten solves, by making the computer vision engine considerably more context-aware and its outputs much more useful.

Here’s the same image as it might be processed by Glisten’s system:
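A structured response along these lines (the field names and values are invented for illustration; Glisten's actual API schema may differ) groups each value under the attribute it belongs to:

```python
# Structured attributes: each value is filed under a named field,
# so "what color is the shirt?" becomes a direct lookup rather
# than a manual sort through a flat tag list.
product = {
    "category": "shirt",
    "color": "maroon",
    "sleeve_length": "long",
    "neckline": "crew neck",
    "occasion": "casual",
}

print(product["color"])
```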

Better, right?

“Our API response will be actually, the neckline is this, the color is this, the pattern is this,” Wooders said.

That kind of structured data can be plugged far more easily into a database and queried with confidence. Users (not necessarily consumers, as Wooders explained later) can mix and match, knowing that when they say “long sleeves” the system has actually looked at the sleeves of the garment and determined that they are long.

The system was trained on a growing library of around 11 million product images and corresponding descriptions, which the system parses using natural language processing to figure out what’s referring to what. That gives important contextual clues that prevent the model from thinking “formal” is a color or “cute” is an occasion. But you’d be right in thinking that it’s not quite as easy as just plugging in the data and letting the network figure it out.

Here’s a sort of idealized version of how it looks:
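As a rough sketch of the idea (a toy rule-based stand-in, not Glisten's actual NLP pipeline): pairing a product description with a vocabulary of known attribute values lets the system file each term under the category it belongs to. In the real system those associations are learned from millions of image/description pairs rather than hand-written:

```python
# Toy vocabulary mapping known values to attribute categories.
VOCAB = {
    "maroon": "color", "blue": "color",
    "long sleeve": "sleeve_length", "short sleeve": "sleeve_length",
    "crew neck": "neckline", "scoop neck": "neckline",
    "formal": "occasion", "casual": "occasion",
}

def parse_description(description: str) -> dict:
    """Extract structured attributes from a product description."""
    text = description.lower()
    return {category: term for term, category in VOCAB.items() if term in text}

print(parse_description("Maroon long sleeve crew neck tee, casual fit"))
```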

“There’s a lot of ambiguity in fashion terms and that’s definitely a problem,” Wooders admitted, but far from an insurmountable one. “When we provide the output for our customers we sort of give each attribute a score. So if it’s ambiguous, whether it’s a crew neck or a scoop neck, if the algorithm is working correctly it’ll put a lot of weight on both. If it’s not sure, it’ll give a lower confidence score. Our models are trained on the aggregate of how people labeled things, so you get an average of what people’s opinion is.”

The model was initially aimed at fashion and clothing in general, but with the right training data it can apply to plenty of other categories as well — the same algorithms could find the defining characteristics of cars, beauty products and so on. Here’s how it might look for a shampoo bottle — instead of sleeves, cut and occasion you have volume, hair type and paraben content.

Although shoppers will likely see the benefits of Glisten’s tech in time, the company has found that its customers are actually two steps removed from the point of sale.

“What we realized over time was that the right customer is the customer who feels the pain point of having messy unreliable product data,” Wooders explained. “That’s mainly tech companies that work with retailers. Our first customer was actually a pricing optimization company, another was a digital marketing company. Those are pretty outside what we thought the applications would be.”

It makes sense if you think about it. The more you know about the product, the more data you have to correlate with consumer behaviors, trends and such. Knowing summer dresses are coming back is useful, but knowing that blue and green floral designs with 3/4 sleeves are coming back is better.

Glisten co-founders Sarah Wooders (left) and Alice Deng

Competition is mainly internal tagging teams (the manual review we established none of us would like to do) and general-purpose computer vision algorithms, which don’t produce the kind of structured data Glisten does.

Even ahead of Y Combinator’s demo day next week the company is already seeing five figures of monthly recurring revenue, with their sales process limited to individual outreach to people they thought would find it useful. “There’s been a crazy amount of sales these past few weeks,” Wooders said.

Soon Glisten may be powering many a product search engine online, though ideally you won’t even notice — with luck you’ll just find what you’re looking for that much more easily.

(This article originally had Alice Deng quoted throughout when in fact it was Wooders the whole time — a mistake in my notes. It has also been updated to better reflect that the system is applicable to products beyond fashion.)



Read more: feedproxy.google.com

Categories
Software

SD Times news digest: AWS DeepComposer, React Native 0.62, and Nim programming language 1.2

Amazon announced AWS DeepComposer is now generally available. The company first announced the machine learning solution at AWS re:Invent last year.

AWS DeepComposer includes in-console training that enables users to train generative models without having to write any machine learning code. In addition, the composer is powered by Generative Adversarial Networks (GANs) and allows users to train and optimize GAN models.

“Until now, developers interested in growing skills in GANs haven’t had an easy way to get started. In order to help them regardless of their background in ML or music, we are building a collection of easy learning capsules that introduce key concepts, and how to train and evaluate GANs. This includes a hands-on lab with step-by-step instructions and code to build a GAN model,” AWS wrote in a blog post.

Stanford provides free coding resources
Stanford announced a free introductory coding course using Python during the COVID-19 pandemic. 

“The course is centered around engaging assignments and includes an optional final project. You won’t receive a grade in the course, and completing the experience doesn’t earn university credit. Instead, the main outcome is that you will have acquired a new and wonderful skill: how to program,” Stanford wrote. 

The student application deadline is April 8th. Additional details are available here.

React Native 0.62 now available
React Native 0.62 includes support for Flipper by default, as well as dark mode features and the migration of Apple TV support to react-native-tvos.

Flipper is a developer tool for debugging mobile apps that includes Metro Actions, a Crash Reporter, React DevTools, Network Inspector, Metro and Device Logs, and more. 

Additional details are available here.

Nim programming language 1.2
Nim version 1.2 adds --gc:arc – the main feature of this release – as well as new sugar macros that should help with writing some common tasks, and standard library additions and changes.

If developers would like to upgrade to v1.2 but are relying on v1.0 behaviour, there is a command-line switch, --useVersion, which can be used with the newest Nim to simulate previous versions.

Additional details are available here.


Read more: sdtimes.com

Categories
News

Y Combinator moves its online Demo Day forward one week

Just days ago, Y Combinator announced that its upcoming Demo Day event would be moving online due to “growing concern over COVID-19.” The event, previously planned to span two days at San Francisco’s Pier 48 building, would instead be hosted entirely online on March 23rd.

More changes this evening: YC is shifting Demo Day forward one full week, from March 23rd to March 16th.

In a blog post on the change, Y Combinator CEO and partner Michael Seibel cites an “accelerated” pace from investors in recent days as having encouraged the move:

Over the last few days, a large number of investors have accelerated their outreach to our current batch of founders. They are moving quickly to make investment decisions, and we’re going to match their pace and accelerate our schedule by one week. YC W20 online Demo Day will now be on March 16.

On March 16, the YC Demo Day website will go live, a modified version of the website that investors and founders have used over the past five years. Through the website, investors will have access to a single-slide summary, a short description of the company, and a team bio. They can sort companies by industry and geography, and will be able to export the list of companies to a spreadsheet.

While YC initially said that the pitches each company traditionally does live onstage for Demo Day would be “pre-recorded and released to all investors at the same time,” the announcement of the sooner-than-expected Demo Day only mentions slide summaries, company descriptions, and team bios — suggesting plans might have changed a bit to accommodate the new schedule.

Y Combinator confirmed to me that the Demo Day site will not have video presentations.


Read more: feedproxy.google.com

Categories
Software

Marriott suffers second data breach affecting 5.2 million guests

Two years after a major data breach that exposed 339 million guest records and cost Marriott $124 million in GDPR violation fines, the company has suffered another, albeit smaller, security breach.

The hospitality company announced that in February 2020 it discovered that a large amount of guest information might have been accessed using the login credentials of two employees at one of Marriott’s franchise locations. Marriott believes this activity started in mid-January, and the accounts were disabled upon discovery. The company also immediately launched an investigation, implemented heightened monitoring, and worked to inform and assist guests.

At this point, Marriott believes that contact details (names, mailing addresses, emails, and phone numbers), loyalty account information (account numbers and point balances), additional personal details (company, gender, and birthday day and month), partnerships and affiliations (linked airline loyalty programs and numbers), and stay preferences (such as room and language preferences) of 5.2 million guests were exposed. While the investigation is still underway, Marriott does not currently believe that account passwords or PINs, payment card information, passport information, national IDs, or driver’s license numbers have been exposed.

RELATED CONTENT: Marriott fined $124 million for 2018 data breach

Marriott notified guests involved in the breach by email on March 31. Included in the email are details on how to enroll in a personal information monitoring service that Marriott is offering. The company also set up a dedicated website and call center with additional information.

The company also noted that it carries cyber insurance and is working with insurers to assess coverage. At this time, Marriott does not believe the total cost of this incident will be significant. 

Kevin Lancaster, GM of security solutions at Kaseya, believes that the nature of this breach presents a good opportunity for companies to prioritize cybersecurity awareness training, particularly phishing training. “One of the most effective types of active training is phishing simulation,” he said. “As the name implies, you mail out simulated phishing attempts to people in your organization and track their response. This helps you to get a better sense of security awareness of individuals in your organization. While one employee might be on top of their game, another might be submitting data to every phishing email that he gets. So it’s best to direct limited training resources where they’re most needed. I’ve also seen cases where just knowing that phishing simulation goes on in the organization and that their management sees the results improves people’s caution with clicking on sketchy emails.” 


Read more: sdtimes.com

Categories
News

Nvidia acquires data storage and management platform SwiftStack

Nvidia today announced that it has acquired SwiftStack, a software-centric data storage and management platform that supports public cloud, on-premises and edge deployments.

SwiftStack’s recent launches focused on improving its support for AI, high-performance computing and accelerated computing workloads, which is surely what Nvidia is most interested in here.

“Building AI supercomputers is exciting to the entire SwiftStack team,” says the company’s co-founder and CPO Joe Arnold in today’s announcement. “We couldn’t be more thrilled to work with the talented folks at NVIDIA and look forward to contributing to its world-leading accelerated computing solutions.”

The two companies did not disclose the price of the acquisition, but SwiftStack had previously raised about $23.6 million in Series A and B rounds led by Mayfield Fund and OpenView Venture Partners. Other investors include Storm Ventures and UMC Capital.

SwiftStack, which was founded in 2011, placed an early bet on OpenStack, the massive open-source project that aimed to give enterprises an AWS-like management experience in their own data centers. The company was one of the largest contributors to OpenStack’s Swift object storage platform and offered a number of services around it, though in recent years it seems to have downplayed the OpenStack relationship as that platform’s popularity has fizzled in many verticals.

SwiftStack lists the likes of PayPal, Rogers, data center provider DC Blox, Snapfish and Verizon (TechCrunch’s parent company) on its customer page. Nvidia, too, is a customer.

SwiftStack notes that its team will continue to maintain an existing set of open source tools like Swift, ProxyFS, 1space and Controller.

“SwiftStack’s technology is already a key part of NVIDIA’s GPU-powered AI infrastructure, and this acquisition will strengthen what we do for you,” says Arnold.



Read more: feedproxy.google.com

Categories
Software

Report: DevOps needs to undergo a human transformation

While most of the industry is undergoing a digital transformation, DevOps Institute CEO Jayne Groll stresses the need for a human transformation. According to Groll, DevOps initiatives are focusing too much energy on technology and not enough on skills.

The DevOps Institute released the Upskilling 2020: Enterprise DevOps Skills Report to find the most in-demand skills needed for DevOps. The data was based on more than 1,200 respondents. 

“Human transformation is the single most critical success factor to enable DevOps practices and patterns for enterprise IT organizations,” said Groll. “Traditional upskilling and talent development approaches won’t be enough for enterprises to remain competitive because the increasing demand for IT professionals with core human skills is escalating to a point that business leaders have not yet seen in their lifetime. We must update our humans through new skill sets as often, and with the same focus, as our technology.”

RELATED CONTENT: Creating a DevOps culture

According to the report, more than 50% of respondents are having trouble on their DevOps transformation journeys, and 58% cited finding skilled DevOps individuals as a challenge. Another 48% find it difficult to retain skilled DevOps professionals.

“The DevOps human and the associated skills plays a huge role in enabling an organization and its culture towards agile innovation, cross-functional collaboration and risk-taking to support digital operating models such as DevOps,” the report stated. “The fight for talent is not new as hiring managers are nervous about a talent gap in their teams relative to human, functional, technical and process skills and knowledge. Individuals in current positions are eager to update their skills. New job entrants are needing to know how to compete with skills and talents for today’s and future opportunities.”

The institute found that the top skills necessary to create a “DevOps Human” are process skills and knowledge, automation, and human skills.

In addition, the DevOps Institute found that not enough business leaders are focused on upskilling talent. More than 38% of respondents’ organizations don’t have an upskilling program, 21% are working toward one, and 7% don’t even know if one is available to them. Thirty-one percent found their company is already implementing a formal upskilling program.

As part of the report, the DevOps Institute is also introducing the “E-shaped” human of DevOps. Last year, the 2019 skills report focused on “T-shaped” humans, who are specialists with disciplinary depth in one area, such as the cloud, but with the ability to reach out to other disciplines. “T-shaped individuals supplement their depth of specific knowledge (the deep stem of the T) with a wide range of general knowledge (the general top of the T). The need for T-shaped talent is being driven by the increasing requirement for speed, agility and quality software from the business,” the 2019 report stated.

This year’s report highlighted the need to evolve “T-shaped” humans into “E-shaped” humans, defined by “4 Es”: experience, expertise, exploration and execution. Additionally, there are horizontal and vertical skills an “E-shaped” DevOps human must possess. The horizontal skills include automation, functional, knowledge and technical skills, while the vertical skill set includes flow, understanding of practices such as Scrum and Value Stream Mapping, as well as human skills like collaboration and interpersonal skills.

“The time is now to upskill your DevOps teams and individuals, however, this must be done across more than technical and functional skills,” said Eveline Oehrlich, research director at the institute. “We already saw a significant demand for a variety of human must-have skills in our 2019 research and this year we saw a tremendous increase across all human skills e.g. collaboration, interpersonal skills, empathy and creativity to name a few. The most important though is the increase in value placed on the human skills, which comes from the management and business leaders in our survey. Our research shows that a mindset shift is happening with the transition from ‘soft’ skills to ‘human’ skills, but more importantly, today’s leaders must change their mindset to recognize the value human skills will bring to a team and organization.”


Read more: sdtimes.com

Categories
News

The Mars 2020 rover has a new name: Perseverance

The next NASA rover to go to Mars has shed its code name and assumed a new one, sourced from the ingenuous youth of our nation. Keeping with the tradition of using virtues as names, the Mars 2020 rover will henceforth be known as “Perseverance.”

This particular virtue was suggested by Alexander Mather, a middle-schooler in Virginia. He and some 28,000 other kids proposed names in an essay contest last year. The final nine contenders were: Endurance, Tenacity, Promise, Vision, Clarity, Ingenuity, Fortitude, Courage and, of course the winner, Perseverance.

The name is perhaps the most apropos, with the possible exception of Endurance, given the track record of Mars rovers vastly outliving their official mission length. Like some kind of scientific Gilligan’s Island, Opportunity famously set out for a 90-day tour of the Martian surface and ended up trundling around for over 14 years before finally losing power for good during a planet-scale sandstorm.


These rovers don’t just keep going effortlessly, of course; the teams must constantly exert their ingenuity to rescue, redirect and reprogram the distant robotic platforms. It was this aspect that seems to have caught the space agency’s eye.

“Like every exploration mission before, our rover is going to face challenges, and it’s going to make amazing discoveries. It’s already surmounted many obstacles to get us to the point where we are today,” said Thomas Zurbuchen, NASA’s associate administrator of the Science Mission Directorate, in a news release. “Alex and his classmates are the Artemis Generation, and they’re going to be taking the next steps into space that lead to Mars. That inspiring work will always require perseverance.”

The kid, Mather, didn’t just do this as some in-class activity mass-emailed by the teacher. He went to space camp in 2018 and had his mind blown by the Saturn V rocket he saw there. Now, having won the naming contest, he’ll get to go with his family to Cape Canaveral to watch the rover launch this summer.

“This was a chance to help the agency that put humans on the Moon and will soon do it again,” Mather said. “This Mars rover will help pave the way for human presence there and I wanted to try and help in any way I could. Refusal of the challenge was not an option.”


In acknowledgement of the other kids who entered the contest, Perseverance will be equipped with a chip inscribed with the 8 semifinalists’ names, as well as 155 more semifinalists’ proposals — in letters a thousandth the width of a human hair, but still.

We’ll have more coverage of the mission as launch time approaches, but in the meantime you can keep up with the latest at the obligatory and always delightful first-person-rover Twitter account.


Read more: feedproxy.google.com