Automotive

Over-the-Air (OTA) software updates

This case study is about a premium car manufacturer from southern Germany that uses its company vehicles to test over-the-air (OTA) software updates. This allows the manufacturer to test updates under realistic conditions before they are rolled out to end customers.

Problem statement

Extensive testing is necessary for the safe and smooth release of software updates in vehicles, especially when they are delivered over-the-air (OTA). Since it is in the manufacturer’s interest to minimize the risk of faulty software patches, updates pass through several test cycles before release. Adding to the complexity, different hardware and software versions must be supported over the course of a product lifecycle. Consequently, an extensive and reliable testing strategy is necessary to meet the manufacturer’s high quality requirements.

In addition to vehicle simulation, our customer also mass-tests OTA software updates on real vehicles. For this purpose, the customer uses its fleet of company vehicles, which covers a large number of vehicle models. Since these vehicles are in active use, they form an ideal target group for testing OTA software updates under realistic conditions before they are rolled out to end customers.

The challenge in this case was twofold. On the one hand, the data needed to test software updates on company vehicles was only available in a legacy system, and accessing it involved a great deal of effort. On the other hand, the manufacturer expects significant growth in the number of vehicle models that will have to be supplied with OTA updates in the future, and the current approach would no longer be economically viable at that scale.

Our task was to provide the necessary data in an automated and cost-efficient manner via modern interfaces - taking the predicted growth into account - so that company vehicles could continue to be used as an essential part of the test strategy for OTA updates.

Our solution

As with all our projects, the solution design was based on the business requirements and the technical circumstances. In this case, the initial situation was that users accessed the company vehicle data predictably and at intervals; only in exceptional cases, for example when the owner of a vehicle had to be determined, were ad hoc requests made to the interfaces.

Based on these premises and the requirement that the solution be scalable and operated at minimal cost, we implemented the following:

Architecture

We opted for a classic microservice architecture. Decoupled from the legacy system, the microservices could be scaled independently and optimized for the target infrastructure.

Infrastructure

Our customer’s cloud platform team specified the basic cloud infrastructure: AWS as the cloud platform and Kubernetes (EKS) as the container orchestration platform. It would also have been conceivable to forgo EKS and rely on a serverless infrastructure (e.g., AWS Fargate); this is advisable when the business value does not justify the cost of an EKS cluster.

For the EC2 worker nodes, we chose Amazon’s T3 general-purpose instances (see EC2 instances). These are very cost-effective and, being burstable, accumulate CPU credits during low-load periods that can be spent under high load. In conjunction with the AWS Cluster Autoscaler and the Kubernetes Horizontal Pod Autoscaler, they are ideal for our scalable service design.

Data

To make the company vehicle data from the legacy system available in our target infrastructure (AWS) in a cost-efficient manner, we opted for a classic ETL approach. The data is managed in the target infrastructure in an Amazon Aurora Serverless v2 database. The advantage of the serverless design is that the capacity of the database service automatically follows the load profile of the application: the database’s resources grow when the load increases and shrink during low-load periods. The available resources are thus used optimally at all times and no unnecessary costs are incurred.
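To illustrate the idea, the following is a minimal sketch of such an ETL job in Java. The table name, column names, connection details, and the extraction step are hypothetical placeholders rather than the customer’s actual schema, and the Aurora cluster is assumed to be PostgreSQL-compatible and reachable via a standard JDBC URL.

// Minimal ETL sketch (hypothetical names): extracts vehicle records and loads
// them into an Aurora Serverless v2 (PostgreSQL-compatible) database via JDBC.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

public class VehicleEtlJob {

    // Simplified representation of a company vehicle record (illustrative only).
    record Vehicle(String vin, String model, String owner) {}

    public static void main(String[] args) throws Exception {
        // Extract: in the real job this step reads from the legacy interface;
        // here a hard-coded list stands in for it.
        List<Vehicle> vehicles = extractFromLegacySystem();

        // Load: the Aurora cluster is reached via a standard JDBC URL, e.g.
        // jdbc:postgresql://<cluster-endpoint>:5432/vehicles
        String jdbcUrl = System.getenv("AURORA_JDBC_URL");
        try (Connection con = DriverManager.getConnection(
                     jdbcUrl, System.getenv("DB_USER"), System.getenv("DB_PASSWORD"));
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO company_vehicle (vin, model, owner) VALUES (?, ?, ?) "
                             + "ON CONFLICT (vin) DO UPDATE SET model = EXCLUDED.model, owner = EXCLUDED.owner")) {

            for (Vehicle v : vehicles) {
                ps.setString(1, v.vin());
                ps.setString(2, v.model());
                ps.setString(3, v.owner());
                ps.addBatch();   // batched inserts keep the load phase efficient
            }
            ps.executeBatch();
        }
    }

    private static List<Vehicle> extractFromLegacySystem() {
        // Placeholder for the legacy extraction (file export, staging table, etc.).
        return List.of(new Vehicle("VIN0000000000001", "ExampleModel", "Fleet Department"));
    }
}

In the actual solution, this load step runs on a schedule that matches the predictable access pattern described above, so the database only has to serve already-prepared data.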

Service

The microservices were developed with the cloud-native framework Quarkus, which allowed us to implement highly efficient services very quickly using Enterprise Java (Eclipse MicroProfile). In particular, the fast development and test cycles helped us develop the high-value services in a very short time span.
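As an illustration, here is a minimal sketch of such a service, assuming a recent Quarkus version (Jakarta REST namespace) with the Jackson extension for JSON serialization. The endpoint path, field names, and the hard-coded data are hypothetical and only stand in for the Aurora-backed data access of the real service.

// Minimal Quarkus (Jakarta REST / MicroProfile) resource sketch with
// hypothetical endpoint and field names.
package com.example.vehicles;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.NotFoundException;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.PathParam;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import java.util.List;

@Path("/vehicles")
@Produces(MediaType.APPLICATION_JSON)
public class VehicleResource {

    // Simplified vehicle representation (illustrative only).
    public record Vehicle(String vin, String model, String owner) {}

    // Placeholder data; the real service reads from the Aurora database.
    private final List<Vehicle> vehicles =
            List.of(new Vehicle("VIN0000000000001", "ExampleModel", "Fleet Department"));

    // Bulk read used by the periodic, predictable test-data queries.
    @GET
    public List<Vehicle> list() {
        return vehicles;
    }

    // Ad hoc lookup, e.g. to determine the current owner of a vehicle.
    @GET
    @Path("/{vin}")
    public Vehicle byVin(@PathParam("vin") String vin) {
        return vehicles.stream()
                .filter(v -> v.vin().equals(vin))
                .findFirst()
                .orElseThrow(NotFoundException::new);
    }
}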

We also took advantage of the ability to compile the Java applications to native executables (see GraalVM). The resulting x86 binaries have a significantly lower resource footprint (e.g., RSS memory) than the JVM artifacts, which has a positive cost impact in the cloud. Moreover, the fast startup times of the native executables helped us realize a scalable application landscape with adequate response times despite the cost-optimized hardware (see Infrastructure).

Delivery model

We chose an agile process model for implementing the requirements. This enabled us to realize both the solution design and the solution itself within two sprints (two weeks each), so only four weeks passed from commissioning to deployment of the solution (go-live).

Outcome

Our cloud expertise enabled us to implement a high-quality cloud-native solution within four weeks that both met the business requirements and took into account the characteristics of the cloud. The result is a service that scales from the infrastructure (database and EC2 instances) to the application code (pods) at minimal cost.

Our customer is now well prepared for the projected growth and can continue to use their company vehicles as an essential part of their testing strategy for OTA updates.

Techspace - Analyze. Migrate. Scale. 🚀



Let’s work together.

Would you also like to build a success story with us? Then do not hesitate to contact us. We will be very happy to support you.

Contact Us