Categories
News

Indonesian startup Pomona raises $3 million to help brands increase engagement with cashback offers

Pomona, an Indonesian startup that creates omnichannel marketing and sales software for consumer brands, announced today it has raised a Series A-2 of $3 million led by Vynn Capital. The round also included participation from new investors Ventech China and Amand Ventures, and returning ones Stellar Kapital and Central Capital Ventura.

(As for why it’s an A-2: co-founder and CEO Benz Budiman explained that the company already raised an undisclosed pre-A round, which was reported in the media last year as a Series A. To avoid confusion, and because this funding is not a Series B, the company is referring to it as an A-2.)

Founded in 2016 by Budiman and CTO Ari Suwendi, Pomona’s software enables brands to offer cashback incentives to Indonesian consumers and includes tools for analyzing customer engagement, offline sales conversion and the effectiveness of marketing and advertising campaigns. It was developed specifically for brands in two categories: consumer packaged goods, such as toiletries and cleaning supplies, and fast-moving consumer packaged goods, or items with a short shelf life and high turnover like food and seasonal items.

The company currently works with more than 50 brands, including Unilever, Japfa, ABC President, Sosro, Frisian Flag and Sungai Budi.

Pomona’s new funding will be used to add new services with the goal of becoming an end-to-end solution, says Budiman. “For brands, we want to provide more information and data points so they can better understand local Indonesian consumers and tailor their engagement strategies to improve outreach efficiency, cut costs and better address their needs.”

The company also has plans to expand into new markets in Southeast Asia, but is keeping which countries under wraps for now.

Pomona’s solutions let customers redeem cashback offers by scanning a receipt with the Pomona app, or its partners can use Pomona’s white-label solutions to create their own branded cashback rewards app. In addition to increasing sales, Pomona’s solutions can help increase offline-to-online conversion rates, since most purchases are still made in brick-and-mortar stores. This gives companies a new way to examine what motivates customer engagement and their purchasing behavior.

In a press statement, Vynn Capital founding partner Victor Chua said, “As a major driver for private consumption in Southeast Asia, Indonesia is an increasingly important market for global brands. Understanding the behavioral traits of local consumers is essential for brands entering and operating in this dynamic market as consumers become more educated and preferential in their spending options.”


Read more: feedproxy.google.com

Categories
Software

Going ‘lights-out’ with DevOps

People sometimes describe DevOps as a factory. It’s a good analogy. Like raw material in a factory, code goes in one end of the DevOps line. Finished software comes out the other.

I’d take the idea one step further. In its highest form, DevOps is not just any factory, but a ‘lights-out’ factory.

Also called a “dark factory,” a lights-out factory is one so automated it can perform most tasks in the dark, needing only a small team of supervisors to keep an eye on things in the control room. That’s the level of automation DevOps should strive for. 

In a lights-out DevOps factory, submitted code is automatically checked for adherence to coding standards, run through static analysis, scanned for security vulnerabilities and measured for automated test coverage. After making it through the first pass, the code gets put through its paces with automated integration, performance, load and end-to-end tests. Only then, after completing all those tests, is it ready for deployment to an approved environment.
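
To make the sequencing concrete, here is a minimal Python sketch of that two-pass gate. The check names are placeholders for whichever linters, scanners and test runners you actually use, not any particular vendor’s API.

```python
# A minimal sketch of the two-pass "factory line" described above.
# Every check name is a hypothetical stand-in for a real tool.

FIRST_PASS = ["coding_standards", "static_analysis", "security_scan", "test_coverage"]
SECOND_PASS = ["integration_tests", "performance_tests", "load_tests", "end_to_end_tests"]

def run_check(name: str, commit: str) -> bool:
    """Pretend to run one automated check against a commit."""
    print(f"running {name} for {commit}")
    return True  # a real pipeline would shell out to the relevant tool here

def ready_for_deployment(commit: str) -> bool:
    for stage in (FIRST_PASS, SECOND_PASS):
        if not all(run_check(name, commit) for name in stage):
            return False  # stop the line; the code never reaches an environment
    return True

if __name__ == "__main__":
    print("deployable:", ready_for_deployment("abc123"))
```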

RELATED ARTICLES:
7 essential best practices to follow when adopting a DevOps model
SKIL: A framework for humans, not the machines of DevOps

As for those environments, the lights-out DevOps factory automatically sets them up, provisions them, deploys to them and tears them down as needed. All software configuration, secrets, certificates, networks and so forth spring into being at deploy time, requiring no manual fiddling with settings. Application health is monitored at a fine-grained level, and actual production runtime performance is visible through intuitive dashboards and queryable operator consoles (the DevOps version of the factory control room). When needed, the system can self-heal as issues are detected.

This might sound like something out of science fiction, but it’s as real as an actual, full-fledged lights-out factory. Which is to say, “real, but rare.” Many automated factories approach lights-out status, but few go all the way. The same could be said of DevOps.

The good news is that you can design a basic factory line that delivers most of the benefits of a “lights-out” operation and isn’t too hard to create. You’ll get most of the ROI just by creating a DevOps dark factory between production and test.

Here is a checklist for putting together your own “almost lights-out” DevOps solution. Don’t worry. None of these decisions are irreversible. You can always change your mind. It will just take some rework.

1. IaaS or PaaS or containers – I recommend PaaS or containers. Infrastructure as a Service has its place, but it has downsides in price point and configuration management. A VM is always on, so you pay for 100 percent of the resource even though your utilization is never maxed out; in other words, you’re paying to keep the VM running while it sits idle. Setup and configuration are also more complex, as you have to deploy the bare-metal instance and then deploy a configuration on top of it. Lastly, with IaaS it’s all too easy to just run bespoke VMs and fall back into old habits. I’m a big fan of PaaS because you get a nice price point and just the right amount of configurability, without the added complexity of full specification. Containers are a nice middle ground. The spend for a container cluster is still there, but if you’re managing a large ecosystem, the orchestration capabilities of containers could become the deciding factor.

2. Public cloud or on-premises cloud – I recommend public cloud. Going back to our factory analogy, a hundred years ago factories generated their own power, but that meant they also had to own the power infrastructure and keep people on staff to manage it. Eventually centralized power production became the norm. Utility companies specialized in generating and distributing power, and companies went back to focusing on manufacturing. The same thing is happening with compute infrastructure and the cloud providers. The likes of Google, Amazon and Microsoft have taken the place of the power companies, having developed the specialized services and skills needed to run large data centers. I say let them own the problem while you pay for the service.

There are situations where a private cloud can make sense, but it’s largely a function of organizational size. If you’re already running a lot of large data centers, you may have enough core infrastructure and competency in place to make the shift to private cloud. If you decide to go that route, you absolutely must commit to a true DevOps approach. I’ve seen several organizations say they’re doing “private cloud” when in reality they’re doing business as usual and don’t understand why they’re not getting any of the temporal or financial benefits of DevOps. If you find yourself in this situation, do a quick value-stream analysis of your development process, compare it to a lights-out process, and you’ll see nothing’s changed from your old Ops model.

3. Durable storage for databases, queues, etc. – I recommend using a DB service from the cloud provider. Similar to the decision between IaaS and PaaS, I’d rather pay someone else to own the problem. Making any service resilient means having to worry about redundancy and disk management. With a database, queue, or messaging service, you’ll need a durable store for the runtime service. Then, over time, you’ll not only have to patch the service but take down and reattach the storage to the runtime system. This is largely a solved problem from a technological standpoint, but it’s just more complexity to manage. Add in the need for service and storage redundancy and backup and disaster recovery, and the equation gets even more complex. Again, the cloud providers are more than willing to own those problems, and offer cost-effective, scalable solutions for common distributed services that need high durability.

4. SQL vs. NoSQL – Many organizations are still relational database-centric, as they were in the ’90s and ’00s, with the RDBMS at the center of the enterprise universe. Relational still has its place, but cloud-native storage options like table, document and blob storage provide super-cheap, high-performance alternatives. I’ve seen many organizations basically apply their old standards to the cloud and say, “Well, you can’t use blob storage because it’s not an approved technology,” or “You can’t use serverless because it’s an ‘unbounded’ resource.” That’s the wrong way to do it. You need to re-examine your application strategy to use the best approach for the price point.

I once had a client whose data changed fairly slowly (every few weeks) but had to be accessed much more frequently. First they tried querying the same static data with the same queries, over and over. The performance was OK, and execution time dropped significantly once the DB cache was primed. Then there was a push to give the DB instance more RAM so it could hold more data in the cache. We offered an alternative: precompute static read models and dump them in blob storage. The cost of the additional storage was a couple of dollars a month, whereas increasing the specs of the DB instance would have cost more than a hundred a month. We achieved faster performance for less cost, but it required re-evaluating our approach.
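
Here is a minimal sketch of that read-model pattern, assuming AWS S3 via boto3 (the client’s actual provider and schema aren’t named above); the bucket, key and query function are hypothetical.

```python
# Sketch: precompute a static read model and park it in cheap blob storage.
# Assumes AWS S3 via boto3; the bucket name, object key and
# fetch_slow_query_results() are hypothetical placeholders.
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-read-models"

def fetch_slow_query_results() -> list:
    """Placeholder for the expensive relational query that rarely changes."""
    return [{"region": "APAC", "total": 1234}]

def rebuild_read_model() -> None:
    """Run whenever the source data changes (every few weeks, in this case)."""
    rows = fetch_slow_query_results()
    s3.put_object(Bucket=BUCKET, Key="reports/summary.json",
                  Body=json.dumps(rows).encode("utf-8"))

def read_model() -> list:
    """Readers hit blob storage instead of the database."""
    obj = s3.get_object(Bucket=BUCKET, Key="reports/summary.json")
    return json.loads(obj["Body"].read())
```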

5. Mobile – Mobile builds are one of the things that can throw you for a loop. Android is easy; iOS is a little more complicated. You’ll either need a physical Mac for builds, or, if you go with Azure DevOps, you can run them on a Microsoft-hosted Mac agent in Azure. Some organizations still haven’t figured out that they need a Mac compute strategy. I once had a team so hamstrung by corporate policy that they were literally trying to figure out how to build a “hackintosh” because the business wanted an iOS app but corporate IT shot down buying any Macs. Once we informed them we couldn’t legally develop on a “hackintosh,” they killed the project instead of trying to convince IT to allow Mac infrastructure. Yes, they abandoned a project with a real business case and positive ROI because IT was too rigid.

6. DB versioning – Use a tool like Liquibase or Flyway. Your process can only run as fast as your rate-limiting step, and if you’re still versioning your database by hand, you’ll never go faster than your DBAs can execute scripts. Besides, they have more important things to do.
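
For illustration only, here is a stripped-down Python sketch of the versioned-migration pattern those tools automate, using SQLite. Flyway and Liquibase add the production-grade parts (checksums, locking, rollbacks, multi-engine support), so treat this as a sketch of the idea rather than a substitute.

```python
# Sketch of what a Flyway/Liquibase-style tool automates: apply versioned
# SQL scripts exactly once, in order, and record what has been applied.
import sqlite3
from pathlib import Path

def migrate(db_path: str, migrations_dir: str) -> None:
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version TEXT PRIMARY KEY)")
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_version")}

    # Files named like V001__create_users.sql sort naturally by version.
    for script in sorted(Path(migrations_dir).glob("V*.sql")):
        version = script.name.split("__")[0]
        if version in applied:
            continue
        conn.executescript(script.read_text())
        conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
        conn.commit()
        print(f"applied {script.name}")

if __name__ == "__main__":
    migrate("app.db", "migrations")
```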

7. Artifact management, security scanning, log aggregation, monitoring – Don’t get hung up on this stuff. You can figure it out as you go. Get items in your backlog for each of these activities and have a more junior DevOps resource ripple each extension through the process as it’s developed.

8. Code promotion – Lay out your strategy to go from Dev to Test to Stage to Prod, and replace any manual setup like networking, certificates and gateways with automated scripts.

9. Secrets – Decide on a basic toolchain for secrets management, even if it’s really basic. There’s just no excuse for storing secrets in source control. There are even tools like git-secret, black-box, and git-crypt that provide simple tooling and patterns for storing secrets encrypted.
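
As a baseline, even resolving secrets from the environment at runtime (injected by your CI tool or a cloud secret store) beats committing them to the repo. A minimal sketch, with hypothetical variable names:

```python
# Minimal baseline: read secrets from the environment at runtime, injected
# by the CI system or a secret store, never from source control.
# The variable names below are hypothetical.
import os

class MissingSecretError(RuntimeError):
    pass

def get_secret(name: str) -> str:
    value = os.environ.get(name)
    if not value:
        # Fail loudly at startup rather than limping along with a blank credential.
        raise MissingSecretError(f"secret {name} is not set in this environment")
    return value

if __name__ == "__main__":
    database_url = get_secret("DATABASE_URL")
    api_token = get_secret("PAYMENTS_API_TOKEN")
    print("all secrets resolved")
```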

10. CI – Set up and configure your CI tool, including a backup / restore process. When you get more sophisticated, you’ll actually want to apply DevOps to your DevOps, but for now just make sure you can stand up your CI tool in a reasonable amount of time, repeatedly, with backup.

Now that you’ve made some initial technology decisions and established your baseline infrastructure, make sure you have at least one solid reference project. This is a project you keep evergreen and use to develop new extensions and capabilities for your pipelines. You should have an example for each type of application in your ecosystem. This is the project people should refer to when they want to know how to do something. As you evolve your pipelines, update this project with the latest and greatest features and steps.

For each type of deployment — database, API, front end and mobile — you’ll want to start with a basic assembly line. The key elements of your line will be Build, Unit Testing, Reporting and Artifact Creation. Once you have those, you’ll need to design a process for deploying an artifact into an environment (i.e. deploying to Test, Stage, Prod) with its runtime configuration, as in the sketch below.
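
As a concrete (if simplified) illustration, the deploy step amounts to pairing an immutable artifact with per-environment runtime configuration. The environment names and settings below are made up; a real pipeline would call your platform’s deployment API instead of printing.

```python
# Sketch: promote the same immutable artifact through environments,
# binding runtime configuration at deploy time. All names are illustrative.
ENVIRONMENTS = {
    "test":  {"replicas": 1, "db_host": "db.test.internal"},
    "stage": {"replicas": 2, "db_host": "db.stage.internal"},
    "prod":  {"replicas": 4, "db_host": "db.prod.internal"},
}

def deploy(artifact: str, environment: str) -> None:
    config = ENVIRONMENTS[environment]
    # A real pipeline would call the PaaS or container orchestrator here;
    # printing keeps the sketch self-contained.
    print(f"deploying {artifact} to {environment} with {config}")

if __name__ == "__main__":
    artifact = "myapp-1.4.2.tar.gz"  # produced by the Build/Artifact Creation stages
    for env in ("test", "stage", "prod"):
        deploy(artifact, env)
```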

From there, keep adding components to your factory. Choose projects in the order that gets you the most ROI, either by eliminating a constraint or by reducing wait time. At each stage, try to make “everything as code.” Always create both a deployment and a rollback, and exercise the heck out of both all the time.

When it comes to tooling, there are more than enough good open-source options to get you started.

To sum up, going lights-out means committing to making everything code, automated, and tested. You may not get there with every part of your production line, but just by tackling the basics, you’ll be surprised how much you can get done in the dark. 

The post Going ‘lights-out’ with DevOps appeared first on SD Times.

Read more: sdtimes.com

Categories
Technology Videos

THE MOST INGENIOUS WORKERS AND TOOLS THAT ARE AT THE NEW LEVEL


For copyright matters please contact us at: copymanager.mn@gmail.com

BRAIN TIME ► https://goo.gl/tTWgH2

Read more: youtube.com

Categories
News

Original Content podcast: We’re not on the same page about ‘Frankenstein’s Monster’s Monster, Frankenstein’

Just to get this out of the way: “Frankenstein’s Monster’s Monster, Frankenstein” is a great title. In fact, it’s probably the best thing about the new comedy special on Netflix.

That’s not a complaint about the special itself, which stars David Harbour (a.k.a. Chief Hopper on “Stranger Things”) as both David Harbour Jr — an actor taking on the role of Frankenstein in a play also called “Frankenstein’s Monster’s Monster, Frankenstein” — and David Harbour III, an actor who investigates his father’s life decades later.

If this sounds needlessly complicated, don’t worry. As we explain on the latest episode of the Original Content podcast, the plot mostly serves as a springboard for lots of jokes about actorly jealousy, Chekhov’s gun and the fact that no one can remember that Frankenstein and his monster are two different people. Anthony and Darrell, at least, found the whole thing to be pretty darn delightful.

Jordan, on the other hand, was baffled and unimpressed, and no matter how much time her co-hosts spent over-explaining the various gags, we couldn’t win her over.

In addition to our review, we discuss Netflix’s recent earnings report and try to figure out why, for one of the few times in its history, the streaming service reported a net loss of U.S. subscribers.

You can listen in the player below, subscribe using Apple Podcasts or find us in your podcast player of choice. If you like the show, please let us know by leaving a review on Apple. You can also send us feedback directly. (Or suggest shows and movies for us to review!)

If you’d like to skip ahead, here’s how the episode breaks down:
0:00 Intro
1:50 Netflix subscriber numbers
22:53 “Frankenstein’s Monster’s Monster, Frankenstein” review


Read more: feedproxy.google.com

Categories
Software

SD Times news digest: EPIC challenges Facebook’s FTC settlement, Khronos OpenXR 1.0 specification released, and OpenPDF 1.3

The Electronic Privacy Information Center (EPIC) is challenging Facebook’s $5 billion settlement with the FTC in court, stating that the settlement is “insufficient to address the concerns originally identified by EPIC and the consumer coalition, as well as those findings established by the Commission.”

“The proposed order wipes Facebook’s slate clean without Facebook even having to admit guilt for its privacy violations,” the group’s complaint said.

EPIC filed a motion to block approval of the settlement, and said that the punishments fail to ensure consumer privacy. A provision within the settlement would give Facebook immunity from any complaints that the company violated a 2011 FTC settlement order and close a federal investigation into whether Facebook violated that settlement. 

The 2011 order required Facebook to take several steps to make sure it lives up to its promises in the future, including giving consumers clear and prominent notice and obtaining consumers’ express consent before their information is shared beyond the privacy settings they have established, according to the FTC’s website. 

Khronos OpenXR 1.0 specification released for the AR and VR ecosystem
The Khronos Group announced the ratification and public release of the OpenXR 1.0 specification with publicly available implementations and enhancements to the ecosystem. 

“Our work continues as we now finalize a comprehensive test suite, integrate key game engine support, and plan the next set of features to evolve a truly vibrant, cross-platform standard for XR platforms and devices. Now is the time for software developers to start putting OpenXR to work,” said Brent Insko, OpenXR working group chair and lead XR architect at Intel. 

The group also announced that OpenXR implementations are shipping this week, including the Monado open-source OpenXR implementation from Collabora, the OpenXR runtime for Windows Mixed Reality headsets from Microsoft and an Oculus OpenXR implementation for the Rift.

OpenPDF 1.3 now available
OpenPDF 1.3 has been released with new improvements. OpenPDF is a free Java library for creating and editing PDF files, released under the LGPL and MPL open-source licenses.

The release modernizes OpenPDF to use newer Java features, adds a bug fix that checks the font size before drawing a string, and fixes the use of Document in try-with-resources blocks.

Waymo channels evolutionary selection in training self-driving cars 
Waymo explained the importance of neural networks and the experiments it is running in a research collaboration with Google’s DeepMind to use neural networks in self-driving cars to perform many driving tasks.

A neural network learns by repeatedly attempting a task and being graded on whether it performs the task or not, the company explained. Through repetition, a neural network improves its performance based on these grades, and can be used in self-driving vehicles to predict how others will behave on the road and to plan the car’s next moves. Waymo also explained that population based training (PBT) is a recent approach that jointly optimizes neural network weights and hyperparameters, periodically copying the weights of the best performers and mutating their hyperparameters during training.
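
For a feel of the mechanics, here is a toy Python sketch of the PBT loop described above; it illustrates the exploit-and-mutate idea, not Waymo’s or DeepMind’s implementation, and the “training” is simulated.

```python
# Toy sketch of population based training (PBT): periodically copy the weights
# of the best performers and mutate their hyperparameters. Training is faked.
import random

population = [
    {"weights": [0.0], "lr": 10 ** random.uniform(-4, -2), "score": 0.0}
    for _ in range(8)
]

def train_step(member: dict) -> None:
    """Stand-in for real training: reward learning rates near a fictional optimum."""
    member["score"] += random.gauss(1.0 - abs(member["lr"] - 0.003) * 100, 0.1)

def exploit_and_explore(population: list) -> None:
    ranked = sorted(population, key=lambda m: m["score"], reverse=True)
    top, bottom = ranked[: len(ranked) // 4], ranked[-(len(ranked) // 4):]
    for loser in bottom:
        winner = random.choice(top)
        loser["weights"] = list(winner["weights"])               # exploit: copy weights
        loser["score"] = winner["score"]
        loser["lr"] = winner["lr"] * random.choice([0.8, 1.2])   # explore: perturb hyperparameter

for generation in range(20):
    for member in population:
        train_step(member)
    exploit_and_explore(population)

print("best learning rate found:", max(population, key=lambda m: m["score"])["lr"])
```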

“By incorporating PBT directly into Waymo’s technical infrastructure, researchers from across the company can apply this method with the click of a button, and spend less time tuning their learning rates,” the company wrote in a post.

The post SD Times news digest: EPIC challenges Facebook’s FTC settlement, Khronos OpenXR 1.0 specification released, and OpenPDF 1.3 appeared first on SD Times.

Read more: sdtimes.com

Categories
Technology Videos

PERFECTIONISTS AT WORK THAT WILL AMAZE YOU


For copyright matters please contact us at: copymanager.mn@gmail.com

Mind Warehouse ► https://goo.gl/aeW8Sk

Read more: youtube.com

Categories
Software

Facing the challenges of continuous testing

Continuous delivery facilitates the release of software to production at any time, supporting Agile practices and cutting development release time from several weeks to just a few hours.

Successful continuous delivery means being able to roll out a working configuration to production at any time. You’ve got a large and ever-growing list of “application endpoints” that must work consistently in order for you to achieve continuous delivery.

While development speed has increased, quality hasn’t kept up. Companies are expected to develop faster and release faster, but they face risk and compliance issues if testing isn’t run properly. Not factoring testing into your continuous delivery process can lead to application crashes and customer service issues.

Continuous testing is the process of running automated unit tests throughout the software delivery cycle, as applications are being developed and all the way to production. This includes larger performance tests and API monitoring both before and after a release, so that you are constantly testing that what you have built is achieving its purpose.
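
In practice, the “before and after a release” piece can be as simple as a small suite of functional checks that runs in CI and then again on a schedule against production. Here is a minimal sketch using pytest-style tests and the requests library; the base URL and endpoints are placeholders.

```python
# Minimal sketch: the same functional checks run in CI before release and on a
# schedule against production afterwards. URL and endpoints are placeholders.
import requests

BASE_URL = "https://api.example.com"

def test_health_endpoint_is_up():
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200

def test_search_returns_results():
    response = requests.get(f"{BASE_URL}/products", params={"q": "coffee"}, timeout=5)
    assert response.status_code == 200
    assert len(response.json()) > 0
```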

RELATED ARTICLES:
Continuous testing demands holistic training
Who owns Continuous testing?

In order for these small and continuous tests to be most effective, they should be run by the people who are developing the application themselves, ideally with the testing toolset that they are most familiar with.

Why is continuous testing important?
As software release cycles shrink from years and months to weeks and days, testing practices need to evolve to keep up and to ensure that your site and applications keep running. Any break in an application can lead to unhappy customers and a huge loss in revenue.

A recent Forrester Total Economic Impact Study on Continuous Testing found that implementing true Continuous Testing can reduce QA bottlenecks, saving an organization up to $7 million in operating costs over a three-year period. 

The survey also found that improved efficiency in the requirements design process meant less unnecessary work, and more time freed up through a reduced number of meetings. Efficiency in collecting application requirements results in a value of $1.4 million over that same three-year period, according to the study. 

But continuous testing isn’t easy to implement
A recent report by Capgemini and Sogeti on continuous testing found that although many enterprise companies embrace continuous testing, adopting and implementing best practices has been a challenge. While 57 percent of respondents claimed to have “fully embraced continuous testing,” only 17 percent were using automation during the testing process, and over 80 percent claimed that their teams were spending over a third of their testing time on setting up their test environments.

The problems that software teams faced when it came to automating their testing for a full continuous testing strategy included:

Managing test data
Moving to Agile development processes
Legacy tool sets that did not support agile transformation

So what can teams do to successfully implement continuous testing?
There are several things that software teams can do to successfully implement continuous testing. These include an Agile mindset, a move to shift-left (and shift-right) processes, and adopting the right tools for digital transformation and continuous testing success.

Shifting testing left (and right)
If you are shifting your processes towards continuous delivery, but your testing is stuck at the end of your delivery process, then testing will become a bottleneck in your delivery. 

By shifting your testing left (and right), developers gain greater visibility before software is pushed to production, because they can test for and detect bugs and errors at a faster pace, before it’s too late.

Developers can also manage test data creation based on previous statistics, which improves bug-fix turnaround time and decision-making.

You can read more here about how the NY Times’ engineering team ran its agile transformation, putting testing at the center of its processes.

Utilizing service virtualization to eliminate dependencies
Not having access to the right environment or the right service can slow down your testing. By running tests against on-demand virtual services, teams can easily virtualize the parts of the system that are not under test or are unavailable, whenever they need to. This means testing can move ahead in a timely manner, giving you discrete insight into the quality and performance of what you’re testing.
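
A bare-bones sketch of the idea, using only the Python standard library: stand up a local stub that answers like the dependency you can’t reach and point the system under test at it. The endpoint, payload and port are made up, and real service-virtualization tools add recording, latency simulation and stateful behavior on top of this.

```python
# Bare-bones service virtualization: a local stub that impersonates an
# unavailable downstream dependency so testing can keep moving.
# The endpoint path, canned payload and port are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSES = {
    "/accounts/42": {"id": 42, "status": "active", "balance": 100.0},
}

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED_RESPONSES.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not found"}).encode("utf-8"))

if __name__ == "__main__":
    # Point the system under test at http://localhost:8081 instead of the real service.
    HTTPServer(("localhost", 8081), VirtualService).serve_forever()
```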

You can read more here about how Fidelity used service virtualization to speed up its software delivery, and how it could test out ideas early in the development process.

Use the right toolset
Adopting new testing techniques and tools requires transformation. In order to successfully implement continuous testing, you need the right technical skills, organizational structures and collaboration patterns between the various stakeholders.

Testing skills need to evolve to meet the increasing demands of modern-day testing, and you need to create a culture of people who want to join this digital transformation.

An essential part is working with the tools that your team is already using, so it will be easier for those developers to adopt and run their own tests. These tools should be open-source friendly, easy to use and compatible with a developer’s existing toolset. If you limit who can test, you won’t be able to really implement continuous testing.

You can read more here about how Piksel made the transformation to continuous testing by getting its developers involved in testing at every stage of the software development process, with tools that were easy to use and code with.

By making your team comfortable with these tools through easy adoption, online training and community support, you can lead your team’s shift to a modern continuous testing approach.

The post Facing the challenges of continuous testing appeared first on SD Times.

Read more: sdtimes.com

Categories
News

NASA’s Orion crew capsule is officially complete and ready to prep for its first Moon mission

NASA’s celebrations of the 50th anniversary of the Apollo 11 Moon landing weren’t limited to remembrances of past achievements – the space agency also marked the day by confirming that the Orion crew capsule that will bring astronauts back to the Moon for the first time since the end of the Apollo program is ready for its first trip to lunar orbit, currently set for sometime after June 2020.

Orion won’t be carrying anyone for its first Moon mission – instead, as part of Artemis 1, it’ll fly uncrewed, propelled by the new Space Launch System, spend a total of three weeks in space including six days orbiting the Moon, and then return to Earth. Once back, it’ll perform a crucial test of high-speed re-entry into Earth’s atmosphere to demonstrate the efficacy of the Orion capsule’s thermal shielding prior to carrying actual crew on Artemis 2 in 2022, and ultimately delivering astronauts back to the lunar surface with Artemis 3 in 2024.

This isn’t Orion’s first trip to space, however – that happened back in 2014 with Exploration Flight Test 1, another uncrewed mission in which Orion spent just four hours in space, orbiting the Earth twice before returning to the ground. That mission used a Delta IV rocket instead of the new SLS, and was meant to test key systems prior to Artemis.


On the anniversary of the Apollo Moon landing, the Lockheed Martin-built Orion capsule for the Artemis 1 mission to the Moon is declared finished.

NASA contractor Lockheed Martin, which is responsible for the Orion spacecraft’s construction, also noted that the combined crew module and service module are currently being integrated and will then undergo a series of tests before returning to Kennedy Space Center in Florida by the end of the year to begin the final preparations for launch.


Read more: feedproxy.google.com

Categories
Software

SD Times news digest: Intel’s Reinforcement Learning Coach 1.0, Amazon’s educational tools for ML, and Apple acquires Intel’s smartphone modem business

The latest release of Intel’s Reinforcement Learning Coach incorporates newer and stronger RL algorithms, and maintains and extends the APIs to improve usability. 

“Batch reinforcement learning allows RL to learn from a dataset, while also exercising the dataset for off-policy evaluation of the goodness of the learned policy,” Intel wrote in a post.

The new release also features several new algorithms, support for Batch Reinforcement Learning, improved documentation, bug fixes and new APIs that enable the use of Coach as a Python library.
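
For those curious what “Coach as a Python library” can look like, here is a heavily hedged sketch; it assumes the CoachInterface entry point and the CartPole_ClippedPPO preset bundled with rl_coach 1.0, so check the project’s documentation for the exact names and arguments in your installed version.

```python
# Hedged sketch of driving Coach from Python rather than the CLI.
# Assumes rl_coach >= 1.0 exposes CoachInterface and ships the
# CartPole_ClippedPPO preset; verify against your installed version.
from rl_coach.coach import CoachInterface

if __name__ == "__main__":
    coach = CoachInterface(preset="CartPole_ClippedPPO")
    coach.run()
```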

Amazon provides educational tools for machine learning
Amazon partnered with edX to introduce an interactive course, Amazon SageMaker: Simplifying Machine Learning Application Development, to help users get started with machine learning.

Amazon explained that this is “an intermediate-level digital course that provides a baseline understanding of ML and how applications can be built, trained, and deployed using Amazon SageMaker.” 

Amazon SageMaker is a fully managed, modular service that helps users prepare data, choose an algorithm, train the model, tune and optimize it for deployment. The interactive course was developed by AWS experts. 
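
For context, the build-train-deploy flow the course covers maps onto a short script with the SageMaker Python SDK. This is a hedged sketch: the container image, IAM role and S3 path are placeholders, and argument names differ between SDK versions.

```python
# Hedged sketch of the SageMaker train-then-deploy flow using v2-style
# argument names from the SageMaker Python SDK. The image URI, role ARN
# and S3 path are placeholders.
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

estimator = Estimator(
    image_uri="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training-image:latest",
    role="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

estimator.fit({"train": "s3://example-bucket/training-data/"})

predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")
```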

The full details are available here.

Apple acquires Intel’s smartphone modem business
Apple announced that it is acquiring the majority of Intel’s smartphone modem business for $1 billion; the deal is expected to close in Q4 2019.

“This agreement enables us to focus on developing technology for the 5G network while retaining critical intellectual property and modem technology that our team has created,” said Bob Swan, the CEO of Intel. 

As a result of the acquisition, Apple will hold over 17,000 wireless technology patents, ranging from cellular standards to modem architecture and modem operation, while Intel will develop modems for non-smartphone applications.

Redgate acquires Flyway for cross-platform database migration
Database development solutions provider Redgate announced that it is investing $10 million in the acquisition and development of Flyway, a cross-platform open-source database migration tool.

“We’ve spent the last five years developing a portfolio of SQL Server tools that enable developers to include the database in DevOps, and we want to give those same advantages to every developer on any platform. With Flyway, we’ve just taken a huge leap forward in that direction,” said Simon Galbraith, the CEO and co-founder of Redgate. 

Flyway supports databases ranging from Oracle and MySQL to PostgreSQL and Amazon Redshift, and looks to extend support to new database platforms.

The acquisition of the tool will also provide resources to further tSQLt, the database unit testing framework; SQL Cover, the code coverage tool for T-SQL; and tSQLt Test Adapter, the Visual Studio tool for discovering and executing tSQLt tests.

The post SD Times news digest: Intel’s Reinforcement Learning Coach 1.0, Amazon’s educational tools for ML, and Apple acquires Intel’s smartphone modem business appeared first on SD Times.

Read more: sdtimes.com

Categories
Technology Videos

AMAZING MANUFACTURING THAT IS ON ANOTHER LEVEL


For copyright matters please contact us at: copymanager.mn@gmail.com

BRAIN TIME ► https://goo.gl/tTWgH2

Read more: youtube.com