1. Speedy Prototyping and Innovation
Today, success is defined by the ability to adapt quickly to ever-changing environments and to make the right decisions based on what you learn. That may mean integrating an existing technology or product into current systems, or daring to implement an innovative idea and bring it to market. It can just as well mean the exact opposite: recognising early on that an idea is not worth pursuing.
The sooner we try and test, the sooner we know whether an approach is the right move in the first place. That’s why we rely on hackathons and code sprints! They are the epitome of speedy prototyping and actively promote innovation. Through hackathons and code sprints, we have the chance to create a proof of concept and develop prototypes in just a few days. It’s one of the fastest – and most enjoyable – ways to find innovative solutions and ideas.
2. Micro Frontend, Microservices and Containerisation
Whether we want to send e-mails, work on creative boards or use social media, these everyday tasks are shifting more and more to web-based applications. This shift towards multifunctional software systems requires a special approach to the system architecture to ensure reliability, scalability and growth potential.
Approaching these applications as a “jigsaw puzzle” assembled from small, independent pieces has proven particularly effective and efficient in their development. With this approach, each complete function of an application is extracted as a separate service that communicates with the other services via standardised protocols. The system’s user interface (UI) follows the same pattern, treating frontend components like widgets on a dashboard. Applying this “puzzle” approach to the UI makes it possible to develop functions independently and in parallel without putting the entire system at risk of failure. In practice, this freedom comes from running the services in Docker containers orchestrated in Kubernetes clusters, while the frontend is built from web components.
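To make the frontend side of this tangible, here is a minimal sketch of such a widget as a web component – the element name and the API endpoint are purely illustrative assumptions. Each team can ship a component like this independently, and any host page composes them like puzzle pieces:

```typescript
// A self-contained widget: markup and styles are encapsulated via Shadow DOM.
// The element name and the endpoint below are illustrative assumptions.
class OrderStatusWidget extends HTMLElement {
  connectedCallback(): void {
    const shadow = this.attachShadow({ mode: "open" });
    shadow.innerHTML = `
      <style>p { font-family: sans-serif; }</style>
      <p>Loading order status…</p>
    `;
    // The widget talks to "its" microservice over a standardised protocol (HTTP/JSON).
    fetch("/api/orders/status")
      .then((res) => res.json())
      .then((data) => {
        shadow.querySelector("p")!.textContent = `Open orders: ${data.open}`;
      })
      .catch(() => {
        // A failing widget degrades gracefully instead of breaking the whole page.
        shadow.querySelector("p")!.textContent = "Status unavailable";
      });
  }
}

customElements.define("order-status-widget", OrderStatusWidget);
// Usage in any host page: <order-status-widget></order-status-widget>
```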
And the best thing about this component-by-component approach? It not only produces better-quality software and improved reliability, it also allows resources to be managed efficiently and ensures the system as a whole grows effectively and consistently.
3. Multi-platform Apps
Native apps are great! They integrate perfectly into their respective platforms, offer the best performance and a consistent user interface (UI), and can use every feature the platform provides. The catch: if you want to offer the app on multiple platforms, a separate native app must be developed for each one.
At the same time, many tools and providers follow the “write once, run everywhere” approach, promising to serve multiple platforms from a single codebase – and some of them have become well-established solutions on the market.
Beware: not all of these tools deliver what they promise! In recent years, our observation of the market has identified two key tools for multi-platform development – Flutter and Kotlin Multiplatform. We not only use these tools ourselves but also recommend them to our customers.
Georg Dresler, Senior Software Developer Mobile, Ray Sono

If you’re implementing apps with a strong UI focus, we recommend Flutter. Its framework is optimised to deliver a consistent, well-performing UI as you build the app. Unlike other tools, Flutter generates native code for each platform your app should run on, meaning there are no performance penalties. What’s more, thanks to Flutter’s “Hot Reload” feature, developers can see UI changes in real time while they’re coding, making it a highly productive tool for prototyping.
Our next favourite tool is Kotlin Multiplatform (KMP). It’s ideal for apps that need to map complex business processes and logic. With KMP, data models, logic and services (such as the persistence and network layers) are implemented once, tested and then made available as a native library. This allows business processes to be shared across all platforms – even the backend. KMP also keeps the UI native, so every platform feature can be used without compromise.
4. Headless CMS
A headless CMS is a purely backend content management system (CMS) designed to make content accessible to third-party systems via appropriate interfaces. These tools are called “headless” because they separate backend and frontend: the system’s focus is the administration and delivery of structured content. The classic or traditional CMSs many of us are familiar with come from an era when editors managed content and created websites without any HTML knowledge. Today, however, we expect editors to manage content not only for a website but for multiple applications as well.
A traditional CMS renders and delivers the HTML of a website from the backend – the server side. As we shift to central content control that allows usage across multiple targets, server-side rendering is no longer sufficient – not least because many consuming systems are unable to handle HTML code. A headless CMS therefore provides interfaces through which the content can be sourced in a lightweight, easy-to-read and structured format (most commonly JSON). The move away from server-side rendering also has a significant positive impact on page performance – meaning better load times and an improved user experience.
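To make this concrete, here is a minimal sketch of how a frontend could pull structured content from a headless CMS over such a JSON interface. The URL and the field names are hypothetical – every CMS product defines its own API:

```typescript
// The structured content we expect from the (hypothetical) CMS endpoint.
interface Article {
  title: string;
  slug: string;
  body: string;
}

// Any frontend – website, app or digital signage – can consume the same endpoint.
async function loadArticles(): Promise<Article[]> {
  const res = await fetch("https://cms.example.com/api/articles", {
    headers: { Accept: "application/json" },
  });
  if (!res.ok) {
    throw new Error(`CMS request failed: ${res.status}`);
  }
  return (await res.json()) as Article[];
}

loadArticles().then((articles) => {
  for (const article of articles) {
    console.log(`${article.title} -> /${article.slug}`);
  }
});
```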
Using a headless CMS also brings advantages to the design. Because backend and frontend are separated, the frontend can be replaced much more easily, without the need to adjust anything in the backend or the content itself.
5. Static Site Generator
In the past, there were very few use cases for static websites. Those that did exist were mostly microsites or landing pages. But thanks to new possibilities in web development, static sites are now experiencing a renaissance.
When it comes to creating static sites, our favourite tool is a static site generator. It generates the static website as part of the build process: it accesses various data sources – such as API services or a headless CMS – and uses this information to create individual HTML files from pre-defined templates, filling each page with content. Automated pipelines then deploy the static site to the server, where users can view it with all changes applied.
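The core mechanics fit into a few lines. The following sketch – with made-up pages standing in for a real data source – shows the essence of the build step: fill pre-defined templates with content and write finished HTML files:

```typescript
import { mkdirSync, writeFileSync } from "node:fs";

interface Page {
  slug: string;
  title: string;
  body: string;
}

// A pre-defined template; the generator only substitutes content into it.
const template = (page: Page): string => `<!DOCTYPE html>
<html>
  <head><title>${page.title}</title></head>
  <body><main>${page.body}</main></body>
</html>`;

// In a real pipeline this array would come from an API service or a headless CMS.
const pages: Page[] = [
  { slug: "index", title: "Home", body: "<p>Welcome!</p>" },
  { slug: "about", title: "About", body: "<p>Who we are.</p>" },
];

// The build step: one ready-made HTML file per page, served statically afterwards.
mkdirSync("dist", { recursive: true });
for (const page of pages) {
  writeFileSync(`dist/${page.slug}.html`, template(page));
}
```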
And since these generated static sites are essentially ready-made HTML, CSS and JavaScript files that can be processed directly by the browser, no server-side code (such as resource-heavy database queries) needs to be executed when a user calls up the site.
Static systems and pages bring some big advantages. They offer better performance, are more secure and are more economical on server resources. And thanks to the static site generator, most of the work takes place not when a visitor calls up a web page, but only when the content changes.
Christian Wölk, Senior Software Developer Frontend, Ray Sono

6. Infrastructure as Code
The overall success of system operations depends on the flexibility and robustness of the infrastructure. In the past, this was a predominantly manual process controlled by a team of system administrators who set up servers in data centres. With the rise of cloud hosting, the process changed to manually configuring thousands of cloud services to create the necessary framework. All of this led to a solution called “Infrastructure as Code”.
“Infrastructure as Code” enables infrastructure – for example networks, virtual machines, load balancers and connection topologies – to be managed in a descriptive model, kept in a versioning system much like source code. This yields a precise description of the infrastructure, which gives owners greater confidence in the system and makes scaling and disaster recovery merely a question of applying the right definition.
Almost every cloud provider has now developed its own solution (for example CloudFormation for AWS or Azure Resource Manager for Microsoft Azure). Over the years, however, one solution has established itself as the universal industry standard: Terraform.
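What such a descriptive model looks like in practice: below is a small sketch using CDK for Terraform (CDKTF), which lets you write Terraform definitions in TypeScript. The provider, region and bucket name are placeholder assumptions – the point is that the whole stack is declared, versioned and reproducible:

```typescript
import { Construct } from "constructs";
import { App, TerraformStack, TerraformOutput } from "cdktf";
// Pre-built AWS provider bindings for CDKTF (assumed to be installed).
import { AwsProvider } from "@cdktf/provider-aws/lib/provider";
import { S3Bucket } from "@cdktf/provider-aws/lib/s3-bucket";

// The stack is a descriptive model: we declare what should exist,
// and Terraform works out how to create, update or recreate it.
class StaticHostingStack extends TerraformStack {
  constructor(scope: Construct, id: string) {
    super(scope, id);

    new AwsProvider(this, "aws", { region: "eu-central-1" });

    // Placeholder name – S3 bucket names must be globally unique.
    const bucket = new S3Bucket(this, "assets", {
      bucket: "example-static-assets-bucket",
    });

    new TerraformOutput(this, "bucket_name", { value: bucket.bucket });
  }
}

const app = new App();
new StaticHostingStack(app, "static-hosting");
app.synth(); // Emits Terraform configuration for plan/apply.
```

Because this definition lives in version control like any other source code, recreating the environment after a disaster – or spinning up a second one for scaling – is a matter of applying the same stack again.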
7. Data Science and Data Engineering
There is a digital revolution happening all around us right now, in the form of growing numbers of smart devices and a continued shift to digital services. And in this revolution, we are facing yet another turning point: artificial intelligence and machine assistance. Both are already essential parts of our day-to-day lives, and we can only expect them to become even more integrated into different areas as digitalisation continues. The result is an ever-greater emphasis on data in our daily lives – and we can already see it now. More and more devices, cars and machines are equipped with sensors that generate, gather and provide large amounts of data about how we use and interact with them – think of the weekly screen-time notifications from your phone. But it goes beyond that: websites and software are more intelligent, advertising is more targeted, and digital products are becoming more personalised.
There are two distinct areas at play in this data development era: data engineering and data science.
Data engineering focuses on the preparation of large amounts of data, including creating data warehouses, ensuring data integrity, optimising data pipelines, and implementing models or data filters.
While data engineering essentially gathers the data, data science is what brings the data to life and reveals its true value. With the right toolset for data analysis and modelling, it is possible to gain real business insights.
A data engineer gathers and extracts data for analysis, a data analyst finds insights in it, and a data scientist makes projections and predictions for the future based on the gathered data and the insights deduced from it.
Typically, data science is a cyclical process that starts with examining and pre-processing data to gain a general understanding. From there, algorithms and models can then be planned and tested, and metrics can be defined to ultimately optimise models for a specific use case.
What is vital in these data processes is to keep comparing the results the analysis produces with the original problem. Python, R and many other languages now offer a variety of frameworks and models for statistical analysis, machine learning and deep learning. Frameworks like TensorFlow, scikit-learn, Keras or PyTorch can be quickly set up or adapted to specific problems in order to find correlations in the data or implement machine-learning tasks such as image or speech recognition.
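As a small illustration of the fit-then-predict cycle described above, here is a sketch using TensorFlow.js, the JavaScript/TypeScript counterpart to the TensorFlow framework mentioned earlier. The data is made up and deliberately trivial; real projects would of course train on gathered sensor or usage data:

```typescript
import * as tf from "@tensorflow/tfjs";

// Toy data following y = 2x - 1 – the kind of correlation a model should recover.
const xs = tf.tensor2d([0, 1, 2, 3, 4], [5, 1]);
const ys = tf.tensor2d([-1, 1, 3, 5, 7], [5, 1]);

// A minimal model: a single dense layer can fit a linear relationship.
const model = tf.sequential();
model.add(tf.layers.dense({ units: 1, inputShape: [1] }));
model.compile({ optimizer: "sgd", loss: "meanSquaredError" });

async function run(): Promise<void> {
  // Train the model against the defined loss (the "optimise" step of the cycle)...
  await model.fit(xs, ys, { epochs: 500 });
  // ...then use it to predict an unseen input.
  const prediction = model.predict(tf.tensor2d([10], [1, 1])) as tf.Tensor;
  prediction.print(); // Should print a value close to 19.
}

run();
```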