Software Engineering Daily

By Software Engineering Daily
About this podcast
Technical interviews about software topics.
Latest episodes
today
Streaming architecture defines how large volumes of data make their way through an organization. Data is created at a user’s smartphone, or on a sensor inside a conveyor belt at a factory. That data is sent to a set of backend services that aggregate the data, organizing it and making it available to business analysts, application developers, and machine learning algorithms. The velocity at which data is created has led to widespread use of the “stream” abstraction: a never-ending, append-only array of data. To deal with this volume, streams need to be buffered, batched, cached, mapreduced, machine learned, and munged until they are in a state where they can provide value to the end user. There are numerous ways that data can travel this path, and in today’s episode we discuss the streaming systems, data lakes, and data warehouses that can be used to build an architecture that makes use of streaming data. Ted Dunning is a chief application architect at MapR, and he joins the show to discuss the patterns that engineering teams are using to build modern streaming architectures. Full disclosure: MapR is a sponsor of Software Engineering Daily.

Meetups for Software Engineering Daily are being planned! Go to softwareengineeringdaily.com/meetup if you want to register for an upcoming Meetup. In March, I’ll be visiting Datadog in New York and Hubspot in Boston, and in April I’ll be at Telesign in LA. Summer internship applications to Software Engineering Daily are also being accepted. If you are interested in working with us on the Software Engineering Daily open source project full-time this summer, send an application to [email protected] We’d love to hear from you.

Transcript
Transcript provided by We Edit Podcasts. Software Engineering Daily listeners can go to weeditpodcasts.com/sed to get 20% off the first two months of audio editing and transcription services. Thanks to We Edit Podcasts for partnering with SE Daily.
Sponsors
There’s a new open source project called Dremio that is designed to simplify analytics. It’s also designed to handle some of the hard work, like scaling performance of analytical jobs. Dremio is the team behind Apache Arrow, a new standard for in-memory columnar data analytics. Arrow has been adopted across dozens of projects – like Pandas – to improve the performance of analytical workloads on CPUs and GPUs. It’s free and open source, designed for everyone, from your laptop to clusters of over 1,000 nodes. At dremio.com/sedaily you can find all the necessary resources to get started with Dremio for free. If you like it, be sure to tweet @dremiohq and let them know you heard about it from Software Engineering Daily. Thanks again to Dremio, and check out dremio.com/sedaily to learn more.
Azure Container Service simplifies the deployment, management, and operations of Kubernetes. Eliminate the complicated planning and deployment of fully orchestrated containerized applications with Kubernetes. You can quickly provision clusters to be up and running in no time, while simplifying your monitoring and cluster management through auto upgrades and a built-in operations console. Avoid being locked into any one vendor or resource. You can continue to work with the tools you already know, such as Helm, and move applications to any Kubernetes deployment. Integrate with your choice of container registry, including Azure Container Registry. Also, quickly and efficiently scale to maximize your resource utilization without having to take your applications offline. Isolate your application from infrastructure failures and transparently scale the underlying infrastructure to meet growing demands—all while increasing the security, reliability, and availability of critical business workloads with Azure. Check out the Azure Container Service at aka.ms/acs.
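To make the buffer-batch-aggregate path described in this episode concrete, here is a toy in-memory sketch of one stage of such a pipeline: raw device events are grouped into fixed-size batches, and each batch is rolled up into per-device averages of the kind an analyst or downstream model would consume. All names here are hypothetical illustrations, not MapR’s API.

```python
from collections import defaultdict

def aggregate_batches(events, batch_size=3):
    """Group raw (device_id, value) events into fixed-size batches,
    then roll each batch up into per-device averages."""
    batches = [events[i:i + batch_size] for i in range(0, len(events), batch_size)]
    summaries = []
    for batch in batches:
        totals, counts = defaultdict(float), defaultdict(int)
        for device_id, value in batch:
            totals[device_id] += value
            counts[device_id] += 1
        summaries.append({d: totals[d] / counts[d] for d in totals})
    return summaries
```

In a real streaming system the batching and windowing would be handled by the framework; the point is only that "stream processing" ultimately reduces to repeated small aggregations like this one.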
Feb. 16, 2018
When you go to a website where a video is playing, and your video lags, how does the website know that you are having a bad experience? Problems with video are often not complete failures–maybe part of the video loads and plays just fine, and then the rest of the video is buffering. You have probably experienced sitting in front of a video, waiting for it to load as the loading wheel mysteriously spins. Since problems with video are often not complete failures, troubleshooting a problem with a user’s video playback is not as straightforward as just logging whenever a crash occurs. You need to continuously monitor the video playback on every client device and aggregate it in a centralized system for analysis. The centralized logging system will allow you to separate problems with a specific user from problems with the video service itself. A single user could have bad wifi, or have 50 tabs open with different videos. To identify problems that are caused by the video player rather than the user, you need to capture the playback from every video and every user. Scott Kidder works at Mux, where he builds a streaming analytics system for video monitoring. In this episode, Scott explains how events make it from a video player to the backend analytics system running on Kinesis and Apache Flink. Events from the browser are constantly added to Kinesis (which is much like Kafka). Apache Flink reads those events off Kinesis and map-reduces them to discover anomalies. For example, if 100 users watch a 20 minute cat video, and the video stops playing at minute 12 for all 100 users, there is probably some data corruption in that video. You would only be able to discover that by assessing all users. Scott and I discussed the streaming infrastructure that he works on at Mux, as well as other streaming systems like Spark, Apache Beam, and Kafka. This episode is the first in a short series about streaming data infrastructure.
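The Flink job described above is essentially a keyed count over playback events. Here is a toy batch version of that idea: flag a (video, minute) pair as anomalous when many distinct users stalled at the same point, which implicates the video rather than any one user’s wifi. All names are invented for illustration; this is not Mux’s code.

```python
from collections import Counter

def find_stall_anomalies(events, min_users=50):
    """Given (user_id, video_id, stall_minute) playback events from many
    clients, flag (video_id, minute) pairs where at least min_users
    distinct users stalled at the same point."""
    stalls = Counter()
    seen = set()
    for user_id, video_id, minute in events:
        # Count each user at most once per (video, minute) pair.
        if (user_id, video_id, minute) not in seen:
            seen.add((user_id, video_id, minute))
            stalls[(video_id, minute)] += 1
    return [key for key, n in stalls.items() if n >= min_users]
```

A streaming framework would compute the same counts continuously over a time window instead of over a finished list, but the aggregation logic is the same.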
I wanted to do some shows in preparation for the Strata Data Conference in March in San Jose, which I will be attending thanks to a complimentary ticket from O’Reilly. O’Reilly has been kind enough to give me free tickets since Software Engineering Daily started, back when the show did not have the money to attend any conferences. If you want to attend Strata, you can use promo code PCSED to get 20% off.

Sponsors
A thank you to our sponsor, Datadog, a cloud monitoring platform bringing full visibility to dynamic infrastructure and applications. Create beautiful dashboards, set powerful, machine learning–based alerts, and collaborate with your team to resolve performance issues. Datadog integrates seamlessly with more than 200 technologies, including Google Cloud Platform, AWS, Docker, PagerDuty, and Slack. With fast installation and setup, plus APIs and open source libraries for custom instrumentation, Datadog makes it easy for teams to monitor every layer of their stack in one place. But don’t take our word for it: start a free trial today and Datadog will send you a free T-shirt! Visit softwareengineeringdaily.com/datadog to get started.
Amazon Redshift powers the analytics of your business–and Intermix.io powers the analytics of your Redshift. Intermix.io gives you the tools you need to analyze your Amazon Redshift performance and improve the toolchain of everyone downstream from your data warehouse. The team at Intermix has seen so many Redshift clusters, they are confident they can solve whatever performance issues you are having. Intermix collects all your Redshift logs and makes it easy to figure out what’s wrong so you can take action, all in a nice, intuitive dashboard. Go to intermix.io/sedaily to start your free 30-day trial.
Feb. 15, 2018
At a big enough scale, every software product produces lots of data. Whether you are building an advertising technology company, a social network, or a system for IoT devices, you have thousands of events coming in at a fast pace that you want to aggregate, study and act upon. For the last decade, engineers have been learning to store and process these vast quantities of data. The first common technique was to store all your data to HDFS–the Hadoop Distributed File System–and run nightly Hadoop MapReduce jobs across that data. HDFS is cheap (stored on disk), effective (Hadoop had a revolutionary effect on business analysis), and easy to understand (“every night we take all the data from the previous day, analyze it with Hadoop, and send an email report to our analysts”). The second common technique was the “Lambda Architecture.” The Lambda Architecture used a stream processing system like Apache Storm to process all incoming events as soon as they were created, so that software products could react quickly to the changes occurring in a large scale system. But events would sometimes be processed out of order, or they would get lost due to node failures. To fix those errors, the nightly Hadoop MapReduce jobs would still run, and would reconcile all the problems that might have occurred when the events were processed in the streaming system. The Lambda Architecture worked pretty well–systems were becoming “real time”, and products like Twitter were starting to feel alive as they were able to rapidly process the massive volume of events on the fly. But managing a system with a Lambda Architecture was painful–you had to manage both a Hadoop cluster and a Storm cluster. You had to make sure that your Hadoop processing did not interfere with your Storm processing. 
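The Lambda Architecture described above can be sketched in a few lines: a speed layer produces fast but possibly lossy counts, a batch layer later recomputes exact counts over the durable event log, and a serving layer prefers the exact numbers once they arrive. This is a toy illustration of the pattern, not any particular Storm or Hadoop API.

```python
from collections import Counter

def speed_layer(events):
    """Approximate real-time counts; in a real system (e.g. Storm),
    some events may be lost under node failures -- modeled here by
    skipping events marked 'dropped'."""
    return Counter(e["key"] for e in events if not e.get("dropped"))

def batch_layer(events):
    """Nightly recomputation over the full, durable event log (e.g. a
    Hadoop MapReduce job over HDFS): exact counts, arriving late."""
    return Counter(e["key"] for e in events)

def reconcile(realtime, nightly):
    """Serving layer: overwrite approximate counts with exact ones."""
    merged = dict(realtime)
    merged.update(nightly)
    return merged
```

The operational pain the episode mentions comes from running both layers at once: two clusters, two codebases computing the same counts, and a reconciliation step to paper over their disagreement.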
Today, a newer technique for ingesting and reacting to data has become more common, and is referred to as “streaming analytics.” Streaming analytics is a strategy for performing fast analysis of data coming into a system. In streaming analytics systems, events are sent to a scalable, durable pubsub system such as Kafka. You can think of Kafka as a huge array of events that have occurred–such as users liking tweets or clicking on ads. Stream processing systems like Apache Flink or Apache Spark read the data from Kafka as if they were reading an array that was being continually appended to. A sequence of events written to Kafka is called a “stream.” This can be confusing–with a stream, you imagine a constantly moving, transient sequence of data. That’s partially true, but data will stay in Kafka as long as you want it to. You can set a retention policy for 2 weeks, 2 months, or 2 years. As long as that data is still retained in Kafka, your stream processing system can start reading from any place in the stream. Stream processing systems like Flink or Spark that read from Kafka are still grabbing batches of data and processing them in batches. They are reading from the event stream buffer in Kafka, which you can think of as an array. (This is something that confused me for a long time, so if you are still confused, don’t worry, we explain it more in this episode.) Tugdual Grall is an engineer with MapR. In today’s episode, we explore use cases and architectural patterns for streaming analytics. Full disclosure: MapR is a sponsor of Software Engineering Daily. In past shows, we have covered data engineering in detail–we’ve looked at Uber’s streaming architecture, talked to Matei Zaharia about the basics of Apache Spark, and explored the history of Hadoop. To find all of our episodes about data engineering, download the Software Engineering Daily app for iOS or Android.
These apps have all 650 of our episodes in a searchable format–we have recommendations, categories, related links and discussions around the episodes. It’s all free and also open source–if you are interested in getting involved in our open source community, we have lots of people working on the project and we do our best to be friendly and inviting to new people looking for their first open source project. You can find the project at github.com/softwareengineeringdaily.

Sponsors
Today’s podcast is sponsored by Datadog, a cloud-scale monitoring platform for infrastructure and applications. In Datadog’s new container orchestration report, Kubernetes holds a 41-percent share of Docker environments, a number that’s rising fast. As more companies adopt containers, and turn to Kubernetes to manage their containers, they need a comprehensive monitoring platform that’s built for dynamic, modern infrastructure. Datadog integrates seamlessly with more than 200 technologies, including Kubernetes and Docker, so you can monitor your entire container infrastructure in one place. And with Datadog’s new Live Container view, you can see every container’s health, resource consumption, and running processes in real time. See for yourself by starting a free trial and get a free Datadog T-shirt at softwareengineeringdaily.com/datadog.
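The retained, offset-addressable log this episode describes–events stay readable until the retention policy expires them, and any consumer can re-read from an arbitrary position–can be sketched as a toy single-partition structure. The class and method names below are invented for illustration and are not Kafka’s API.

```python
class RetainedLog:
    """A toy, single-partition model of a Kafka-style log: an
    append-only sequence with a retention horizon, readable from any
    still-retained offset."""

    def __init__(self, retention=1000):
        self.entries = []       # list of (offset, event) pairs
        self.retention = retention
        self.next_offset = 0

    def append(self, event):
        self.entries.append((self.next_offset, event))
        self.next_offset += 1
        # Expire entries older than the retention horizon.
        min_offset = self.next_offset - self.retention
        self.entries = [(o, e) for o, e in self.entries if o >= min_offset]

    def read_from(self, offset):
        """Read everything at or after `offset` that is still retained."""
        return [e for o, e in self.entries if o >= offset]
```

This is why a "stream" is less transient than it sounds: two different consumers can read the same log from different offsets at different speeds, and a new consumer can replay history from wherever retention still allows.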
Feb. 14, 2018
Pinterest is a visual feed of ideas, products, clothing, and recipes. Millions of users browse Pinterest to find images and text that are tailored to their interests. Like most companies, Pinterest started with a large monolithic application that served all requests. As Pinterest’s engineering resources expanded, some of the architecture was broken up into microservices and Dockerized, making the system easier to reason about. To serve users with better feeds, Pinterest built a machine learning pipeline using Kafka, Spark, and Presto. User events are generated from the frontend, logged onto Kafka, and aggregated to build machine learning models. These models are deployed into Docker containers much like the production microservices. Kinnary Jangla is a senior software engineer at Pinterest, and she joins the show to talk about her experiences at the company–breaking up the monolith, architecting a machine learning pipeline, and deploying those models into production.

Sponsors
Sumo Logic is a cloud-native, machine data analytics service that helps you run and secure your modern application. If you are feeling the pain of managing your own log, event, and performance metrics data, check out sumologic.com/sedaily. Even if you already have tools, it’s worth checking out Sumo Logic to see if you can leverage your data even more effectively, with real-time dashboards, monitoring, and improved observability–to improve the uptime of your application and keep your day-to-day runtime more secure. Check out sumologic.com/sedaily for a free 30-day trial of Sumo Logic, to find out how Sumo Logic can improve your productivity and your application observability, wherever you run your applications.
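The aggregation step in the pipeline described above–frontend events read off Kafka and rolled up into training features–can be sketched as a toy. The event shapes and feature names below are invented for illustration and are not Pinterest’s schema.

```python
from collections import defaultdict

def build_features(events):
    """Aggregate raw frontend events into per-user feature dicts of the
    kind a model-training job might consume: impression and click
    counts, plus a derived click-through rate."""
    features = defaultdict(lambda: {"impressions": 0, "clicks": 0})
    for event in events:
        user = features[event["user_id"]]
        if event["type"] == "impression":
            user["impressions"] += 1
        elif event["type"] == "click":
            user["clicks"] += 1
    # Derive a click-through rate per user (guarding against divide-by-zero).
    for user in features.values():
        user["ctr"] = user["clicks"] / max(user["impressions"], 1)
    return dict(features)
```

In production this rollup would be a distributed Spark job over a much richer event schema, but the shape of the work–group by user, count, derive ratios–is the same.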
Feb. 13, 2018
Over 12 years of engineering, Box has developed a complex architecture of services. Whenever a user uploads a file to Box, that upload might cause 5 or 6 different services to react to the event. Each of these services is managed by a set of servers, and managing all of these different servers is a challenge. Sam Ghods is the cofounder and services architect of Box. In 2014, Sam was surveying the landscape of different resource managers, deciding which tool should be the underlying scheduler for deploying services at Box. He chose Kubernetes because it was based on Google’s internal Borg scheduling system. For years, engineering teams at companies like Facebook and Twitter had built internal scheduling systems modeled after Borg. When Kubernetes arrived, it provided an out-of-the-box tool for managing infrastructure like Google does. In today’s episode, Sam describes how Box began its migration to Kubernetes, and what the company has learned along the way. It’s a great case study for anyone looking at migrating their own systems to Kubernetes.

Sponsors
Digital Ocean is a reliable, easy-to-use cloud provider. More and more people are finding out about Digital Ocean, and realizing that Digital Ocean is perfect for their application workloads. This year, Digital Ocean is making that even easier, with new node types–a $15 flexible droplet that can mix and match different configurations of CPU and RAM, to get the perfect amount of resources for your application. There are also CPU-optimized droplets–perfect for highly active frontend servers, or CI/CD workloads. And running in the cloud can get expensive, which is why Digital Ocean makes it easy to choose the right-sized instance. The prices on standard instances have gone down too–you can check out all their new deals by going to do.co/sedaily. As a bonus to our listeners, you will get $100 in credit over 60 days. Use the credit for hosting or infrastructure–that includes load balancers, object storage, and computation. Get your free $100 credit at do.co/sedaily. Thanks to Digital Ocean for being a sponsor of Software Engineering Daily.
Your company needs to build a new app, but you don’t have the spare engineering resources. There are some technical people in your company who have time to build apps–but they are not engineers. OutSystems is a platform for building low-code apps. As an enterprise grows, it needs more and more apps to support different types of customers and internal employee use cases. OutSystems has everything that you need to build, release, and update your apps without needing an expert engineer. And if you are an engineer, you will be massively productive with OutSystems. There are videos showing how to use the OutSystems development platform, and testimonials from enterprises like FICO, Mercedes-Benz, and Safeway. OutSystems enables you to quickly build web and mobile applications, whether you are an engineer or not. Check out how to build low-code apps by going to OutSystems.com/sedaily.
Feb. 12, 2018
When Box started in 2006, the small engineering team had a lot to learn. Box was one of the earliest cloud storage companies, with a product that allowed companies to securely upload files to remote storage. This was two years before Amazon Web Services introduced on-demand infrastructure, so the Box team managed their own servers, which they learned how to do as they went along. In the early days, the backup strategy was not so sophisticated. The founders did not know how to properly set up hardware in a colocated data center. The frontend interface was not the most beautiful product. But the product was so useful that eventually it started to catch on. Box’s distributed file system became the backbone of many enterprises. Employees began to use it to interact with and share data across organizations. The increase in usage raised the stakes for Box’s small engineering team. If Box’s service went down, it could cripple an enterprise’s productivity, which meant that Box needed to hire experienced engineers to build resilient systems with higher availability. And to accommodate the growth in usage, Box needed to predict how much hardware to purchase, and how much space in a data center to rent–a process known as capacity planning. As Box went from 3 engineers to 300, the different areas of the company went from being managed by individuals, to teams, to entire departments with VPs and C-level executives. Jeff Quiesser is an SVP at Box, and one of the earliest employees. He joins the show today to describe how Box changed as the company scaled. We covered engineering, management, operations, and culture. In previous shows, we have explored the stories of companies like Slack, Digital Ocean, Giphy, Uber, Tinder, and Spotify. It’s always fun to hear how a company works–from engineering the first product to enterprises with millions of users. To find all of our episodes about how companies are built, download the Software Engineering Daily app for iOS or Android.

Sponsors
The octopus: a sea creature known for its intelligence and flexibility. Octopus Deploy: a friendly deployment automation tool for deploying applications like .NET apps, Java apps, and more. Ask any developer and they’ll tell you it’s never fun pushing code at 5pm on a Friday, then crossing your fingers and hoping for the best. That’s where Octopus Deploy comes into the picture. Octopus Deploy takes over where your build/CI server ends: use Octopus to promote releases on-prem or to the cloud. Octopus integrates with your existing build pipeline–TFS and VSTS, Bamboo, TeamCity, and Jenkins–and with AWS, Azure, and on-prem environments. Reliably and repeatably deploy your .NET and Java apps and more. If you can package it, Octopus can deploy it! It’s quick and easy to install. Go to Octopus.com to trial Octopus free for 45 days.
Feb. 9, 2018
Employees often find themselves needing to do work outside of the office. Depending on the sensitivity of your task, accessing internal systems from a remote location may or may not be OK. If you are using a corporate application that shows the menu of your company’s cafe on your smartphone, your workload is less sensitive. If you are accessing the proprietary codebase of your company’s search engine, your workload is more sensitive. As Google grew in headcount, the different cases of employees logging in from different places grew as well. Google developed a fine-grained, adaptive security model called BeyondCorp to allow for a wide variety of use cases. Whether you are an engineer logging in from a Starbucks or a human resources employee logging in from your desk, the BeyondCorp system uses the same access proxy to determine your permissions. The BeyondCorp architecture is also built around the assumption of a zero-trust network. A zero-trust network is a modern enterprise security architecture where internal servers do not trust each other. Zero-trust networks assume that the network has already been breached. If you are writing an internal application, your default assumption should be to distrust an incoming request from someone else on the network. The zero-trust model is in contrast to an outdated model of enterprise security–that of the hard outer defense of a firewall, that purports to prevent attackers from ever making their way into the vulnerable inside of a network. The firewall model assumes that all of these servers within the firewall can trust each other. Several papers have come out of Google discussing the BeyondCorp security model. These papers describe the network architecture, and the security philosophies of BeyondCorp. Since the release of these papers, an ecosystem of security providers has sprung up to provide implementation services for companies that want BeyondCorp security in their enterprise. 
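The access-proxy idea at the heart of BeyondCorp–every request is evaluated on who the user is and how trusted their device is, with no credit given for being "inside the network"–can be sketched as a toy decision function. The roles, device tiers, and thresholds below are invented for illustration; they are not Google's actual policy model.

```python
def authorize(user_role, device_trust, resource_sensitivity):
    """Toy zero-trust access decision: grant access only when the
    device's trust tier meets the resource's sensitivity tier.
    Network location deliberately plays no part in the decision."""
    trust_score = {"untrusted": 0, "managed": 1, "fully_managed": 2}[device_trust]
    required = {"low": 0, "medium": 1, "high": 2}[resource_sensitivity]
    if user_role == "engineer" and resource_sensitivity == "high":
        # e.g. proprietary source code: additionally require a fully
        # managed device, even for an otherwise-authorized engineer.
        return trust_score >= 2
    return trust_score >= required
```

So the HR employee checking the cafe menu from a managed laptop is allowed, while the engineer pulling the search-engine codebase from a coffee-shop machine is not–exactly the distinction the episode draws, made by one proxy with one rule set.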
Google has also productized its BeyondCorp system with an identity-aware proxy that is tied into their Google Cloud product. Max Saltonstall is the technical director of information technology in the office of the CTO at Google, where he has helped to facilitate the widespread adoption of the BeyondCorp program. In this episode, we talk about enterprise security–from remote employee access to zero-trust networks. We also talk about implementing the BeyondCorp model–why enterprises should consider it, and how to do it. We have done lots of past shows about security–from car hacking to smart contract vulnerabilities to discussions with luminaries like Bruce Schneier and Peter Warren Singer. To find all of our episodes about security, download the Software Engineering Daily app for iOS or Android.
Feb. 8, 2018
Applications need to be ready to scale in response to high-load events. With mobile applications, this can be even more important. People rely on mobile applications for banking, ride sharing, and GPS. During Black Friday, a popular ecommerce application could be bombarded by user requests–you might not be able to complete a request to buy an item at the Black Friday discount. If you attend the Super Bowl, and then try to catch an Uber after leaving, all the other people around you might be summoning a car at the same time, and the system might not scale. In order to prepare infrastructure for high volume, mobile development teams often create end-to-end load tests. After recording incoming mobile traffic, that traffic can be replicated and replayed to measure a backend’s response to the mobile workload. Paulo Costa and Rodrigo Coutinho are engineers at OutSystems, a company that makes a platform for building low-code mobile applications. In this episode, Paulo and Rodrigo discuss the process of performing end-to-end scalability testing for mobile applications backed by cloud infrastructure. We talked about the high-level process of architecting the load test, and explored the tools used to implement it. Full disclosure: OutSystems is a sponsor of Software Engineering Daily.
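The record-and-replay step of such a load test can be sketched as follows. The record shape (`{ timestampMs, method, url }`) and function name are hypothetical, and a real test harness would also fan the scheduled requests out across many client machines:

```javascript
// Sketch of the record-and-replay step of an end-to-end load test: recorded
// requests keep their relative ordering and spacing, but the timeline is
// compressed by a speedup factor to simulate heavier load than was recorded.
function buildReplaySchedule(recorded, speedup) {
  if (recorded.length === 0) return [];
  const start = recorded[0].timestampMs;
  return recorded.map((req) => ({
    method: req.method,
    url: req.url,
    // When to fire this request, relative to the start of the replay.
    sendAtMs: Math.round((req.timestampMs - start) / speedup),
  }));
}
```

For example, traffic recorded over an hour replayed with a speedup of 4 produces the same request mix in fifteen minutes, approximating a 4x load spike against the backend.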
Feb. 7, 2018
Your friends from college are asking you how to buy Bitcoin. Your mom is emailing you articles about the benefits of decentralized peer-to-peer networks. Your shoe shiner is telling you to buy XRP. It is 2018, and cryptocurrencies have become a daily part of news headlines. The general public may not understand how this technology works, but everyone knows that changes are on the horizon. At some point in the future, our financial and computing systems will be deeply integrated with the cryptoeconomy. We all remember the dot com boom. We know that some people got fantastically rich during that period through speculation. We think–maybe this is our chance to make money. If you read reddit, or almost any news site, you will see stories of obscene wealth intertwined with pseudoscientific discussions of how a new cryptocurrency is going to change the world. What is fact and what is fiction? How far are we from a beautiful future, with frictionless micropayments? Matt Leising is a journalist at Bloomberg who has covered financial markets for 15 years. Today, his reporting has been completely engulfed by cryptocurrencies. There are so many dramatic stories, it’s hard to pick what to focus on. Today, we discuss two topics he has covered recently: Ripple and Tether. Ripple is a company that makes enterprise blockchain solutions for global payments. That sounds like the future, and it is no surprise that people would want to buy into Ripple if possible. Ripple has been around for seven years, with a strong team and relationships with major financial institutions. One of Ripple’s early projects was a currency called XRP. The goal of XRP was to make a fast, scalable digital asset that would facilitate currency exchange among banks. We covered Ripple and XRP in previous episodes with David Schwartz and Greg Kidd. 
XRP remains in circulation, but Ripple the company has shifted development resources away from XRP, and towards RippleNet, which seeks to replace the aging SWIFT code system for banks. Today, several money transfer companies are experimenting with XRP, but the digital currency is not widely used for anything–well, other than speculation. In the tremendous cryptocoin bull run of early 2018, XRP shot up as sharply as almost any other coin. In an article about Ripple, Matt Leising tried to get to the root explanation for why this occurred. Was it a sudden market recognition of some long-term value of XRP? Was it a stampeding herd of people who did not know the state of XRP? Was it a pump and dump? A few days after publishing his article about Ripple, Matt wrote about Tether. Tether purports to be a “stablecoin”–a digital currency which is pegged to the value of something less volatile. Stablecoins are useful in that they can reduce the friction of exchange between tokens. Without a stablecoin, you might have to transfer from one cryptocurrency to USD, which probably involves the US banking system. There’s a good discussion of stablecoins in our episode with Vlad Zamfir and Haseeb Qureshi on cryptoeconomics. If you can use Tether instead of USD, you have less transactional friction. Perhaps you can escape the onerous tax consequences of day trading cryptocurrencies. Tether claims to have $1 USD in reserve for every 1 Tether in circulation. So if you wanted to cash out Tether for USD, you should theoretically be able to do that–except that Tether seems to have no connection to any banks. And Tether has severed ties with the auditing agencies it was working with. There is $2.3B of Tether in circulation. That is a small fraction of the overall trading volume of cryptocurrencies. But it is unknown how much the current crypto bubble is propped up by the functionality of Tether–the ability to seamlessly move between cryptocurrencies without going into USD. 
As long as the market believes in Tether (and today it indeed trades at $0.999014), this stablecoin mystique will persist, and market friction will continue to be smoothed out by that belief. This was Matt’s second appearance on the show, and it was a blast to have him back on. In his last episode, he discussed the infamous DAO hack, which led to an Ethereum fork. To find that episode, as well as links to learn more about the topics described in the show, download the Software Engineering Daily app for iOS or Android. Sponsors Your company needs to build a new app, but you don’t have the spare engineering resources. There are some technical people in your company who have time to build apps–but they are not engineers. OutSystems is a platform for building low-code apps. As an enterprise grows, it needs more and more apps to support different types of customers and internal employee use cases. OutSystems has everything that you need to build, release, and update your apps without needing an expert engineer. And if you are an engineer, you will be massively productive with OutSystems. Find out how to get started with low-code apps today–at OutSystems.com/sedaily. There are videos showing how to use the OutSystems development platform, and testimonials from enterprises like FICO, Mercedes Benz, and Safeway. OutSystems enables you to quickly build web and mobile applications–whether you are an engineer or not. Check out how to build low-code apps by going to OutSystems.com/sedaily.  
Feb. 6, 2018
Over the last decade, computation and storage have moved from on-premise hardware into the cloud data center. Instead of having large servers “on premise,” companies started to outsource their server workloads to cloud service providers. At the same time, there has been a proliferation of devices at the “edge.” The most common edge device is your smartphone, but there are many other smart devices that are growing in number–drones, smart cars, Nest thermostats, smart refrigerators, IoT sensors, and next generation centrifuges. Each of these devices contains computational hardware. Another class of edge device is the edge server. Edge servers are used to deliver faster response times than your core application can provide. For example, Software Engineering Daily uses a content delivery network for audio files. These audio files are distributed throughout the world on edge servers. The core application logic of Software Engineering Daily runs on a WordPress site, and that WordPress application is distributed to far fewer servers than our audio files. “Cloud computing” and “edge computing” both refer to computers that can serve requests. The “edge” is commonly used to refer to devices that are closer to the user–so they will deliver faster responses. The “cloud” refers to big, bulky servers that can do heavy duty processing workloads–such as training machine learning models, or issuing a large distributed MapReduce query. As the volume of computation and data increases, we look for better ways to utilize our resources, and we are realizing that the devices at the edge are underutilized. In today’s episode, Kenton Varda explains how and why to deploy application logic to the edge. He works at Cloudflare on a project called Cloudflare Workers, which are a way to deploy JavaScript to edge servers, such as the hundreds of data centers around the world that are used by Cloudflare for caching. 
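The routing decision such an edge worker makes can be sketched roughly as below. A real Cloudflare Worker registers a fetch event handler and proxies misses to the origin with `fetch`; here the logic is written as a plain function with an illustrative cache object, so the edge-versus-origin split is easy to see:

```javascript
// Sketch of edge-style request routing in the spirit of Cloudflare Workers:
// cached static assets (like this podcast's audio files) are served directly
// from the edge server closest to the user, and everything else falls through
// to the origin, where the core application logic runs.
function handleAtEdge(path, edgeCache) {
  if (edgeCache.has(path)) {
    // Cache hit: respond immediately from the edge, close to the user.
    return { servedBy: 'edge', body: edgeCache.get(path) };
  }
  // Cache miss: a real worker would call fetch(request) against the origin
  // here, and could optionally populate the edge cache with the response.
  return { servedBy: 'origin', body: null };
}
```

The point of running this logic at the edge, rather than only caching there, is that arbitrary JavaScript can make the decision–rewriting requests, A/B testing, or serving entire responses without a round trip to the core application.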
Kenton was previously on the show to discuss protocol buffers, a project he led while he was at Google. To find that episode, and many other episodes about serverless, download the Software Engineering Daily app for iOS or Android. Sponsors Azure Container Service simplifies the deployment, management and operations of Kubernetes. Eliminate the complicated planning and deployment of fully orchestrated containerized applications with Kubernetes. You can quickly provision clusters to be up and running in no time, while simplifying your monitoring and cluster management through auto upgrades and a built-in operations console. Avoid being locked into any one vendor or resource. You can continue to work with the tools you already know, such as Helm, and move applications to any Kubernetes deployment. Integrate with your choice of container registry, including Azure Container Registry. Also, quickly and efficiently scale to maximize your resource utilization without having to take your applications offline. Isolate your application from infrastructure failures and transparently scale the underlying infrastructure to meet growing demands—all while increasing the security, reliability, and availability of critical business workloads with Azure. Check out the Azure Container Service at aka.ms/acs. Simplify continuous delivery with GoCD, the on-premise, open source, continuous delivery tool by ThoughtWorks. With GoCD, you can easily model complex deployment workflows using pipelines and visualize them end-to-end with the Value Stream Map. You get complete visibility into and control of your company’s deployments. At gocd.org/sedaily, find out how to bring continuous delivery to your teams. Say goodbye to deployment panic and hello to consistent, predictable deliveries. Visit gocd.org/sedaily to learn more about GoCD. Commercial support and enterprise add-ons, including disaster recovery, are available.