The post High level IT consulting services for a payment solutions company appeared first on SHALB.
Corefy is a unified payment orchestration platform for online businesses and payment institutions. Integrating with Corefy lets its customers embed feature-rich functionality for making and receiving online payments into their websites and apps. The company has partnered with 200+ PSPs and supports any currency, payment method, and flow.
Corefy is an international company that has several global locations, including the HQ in London, the APAC office in the Philippines, and local offices in the Netherlands and Israel. The company’s R&D office is located in Kyiv.
Around eight years ago, before the emergence of the Corefy platform, its co-founders were engaged in developing another payment solution, Interkassa. Back then, SHALB specialists helped them adjust their existing infrastructure and design a new one on AWS that better suited their product needs. Working together had been a positive experience, so when this new project arose, they didn’t hesitate to contact us again.
Before our involvement, Corefy ran their processes on an outdated, on-premises, VM-based infrastructure that did not scale and caused frequent problems. Corefy had been considering a move to the cloud; however, they lacked the cloud-native expertise to design a new infrastructure with their own resources. Dmytro Dzubenko, CTO at Corefy, says:
SHALB provided us with consulting on how to build our new infrastructure using new technologies to cover our needs. Initially, they took our vision and gave us excellent recommendations for databases, and we decided to go with Kubernetes.
Our task was to provide consulting services that would help Corefy migrate their infrastructure to a new platform and, ultimately, to guide them in rebuilding it on Kubernetes.
Our main efforts revolved around the dev environment, which was set up as a reference for Corefy’s own development efforts.
Following the IaC approach, our specialists used Terraform to describe the creation and provisioning of infrastructure resources: a managed Kubernetes cluster on AWS to run Corefy’s services. The Corefy team has since used this code to launch the staging and production environments.
In order to enable deploying to the Kubernetes cluster from GitLab, we set up GitLab pipelines and employed Argo CD to automate the deployments. Kubernetes manifests were specified as Helm charts. Our team advised Corefy’s engineers on how to write the charts and deploy applications to Kubernetes.
Release changes are committed to the infrastructure repository, where they are picked up by the Argo CD controller and applied to the destination environments. Changes are rolled out automatically to dev and staging, while production rollouts require manual approval.
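As an illustration of this GitOps flow, an Argo CD Application watching a Helm chart in an infrastructure repository might look like the sketch below (repository URL, chart path, and names are hypothetical, not Corefy's actual configuration). Automated sync suits dev and staging; a production Application would omit the `automated` section so that syncs require manual approval.

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payments-dev            # hypothetical application name
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://gitlab.example.com/infra/infrastructure.git  # hypothetical repo
    targetRevision: main
    path: charts/payments       # hypothetical chart path
    helm:
      valueFiles:
        - values-dev.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: payments
  syncPolicy:
    automated:                  # auto-sync for dev/staging; omit for production
      prune: true
      selfHeal: true
```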
Autoscaling of Corefy’s applications is enabled by KEDA, a Kubernetes-based event-driven autoscaler, with Amazon SQS as an event source. The customer’s processing operations are defined as events: as the number of events grows, the workloads are scaled up and the number of worker nodes increases.
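A minimal sketch of such a KEDA configuration, assuming a hypothetical worker Deployment and SQS queue, is a ScaledObject with an `aws-sqs-queue` trigger; KEDA scales the pods, and the cluster autoscaler then adds worker nodes when pending pods no longer fit.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: processing-worker-scaler   # hypothetical name
spec:
  scaleTargetRef:
    name: processing-worker        # hypothetical Deployment to scale
  minReplicaCount: 2
  maxReplicaCount: 50
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.eu-west-1.amazonaws.com/123456789012/processing-events  # hypothetical queue
        queueLength: "100"         # target number of messages per replica
        awsRegion: eu-west-1
```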
The customer’s services in the cluster communicate over a network secured by Linkerd, which is integrated with Argo Rollouts to enhance its traffic-shaping abilities. This configuration makes it possible to shift traffic gradually during a release, routing a configurable percentage of customer queries to the new version.
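Such a gradual shift can be expressed with an Argo Rollouts canary strategy; with Linkerd, traffic weights are applied through an SMI TrafficSplit. A sketch with hypothetical service names and step values:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: payments-api                     # hypothetical name
spec:
  strategy:
    canary:
      canaryService: payments-api-canary # hypothetical canary/stable Services
      stableService: payments-api-stable
      trafficRouting:
        smi: {}                          # Linkerd consumes the generated TrafficSplit
      steps:
        - setWeight: 10                  # send 10% of traffic to the new version
        - pause: {duration: 10m}
        - setWeight: 50
        - pause: {duration: 10m}
        # full cutover happens after the last step
```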
In order to meet SLA commitments, Corefy needed to divide traffic on the platform between its two types of customers: multi-tenant customers, who share a payment application, and dedicated (white-label) customers, who have a personal payment system under their own brand. The latter have a better SLA offering, and their data is stored and processed separately.
The solution was realized at the infrastructure layer by implementing separate load balancers for multi-tenant and dedicated customers.
Thanks to our collaboration, Corefy acquired a cutting-edge infrastructure that is scalable, cloud native, and based on open-source technologies. According to Dmytro Dzubenko, our involvement resulted in a significant reduction of bugs in their system.
Corefy has improved its release process and made it more reliable through the implementation of GitOps, together with Argo CD. Employing Argo Rollouts expanded the deployment functionality in terms of traffic flow and introduced new update strategies (blue-green and canary deployments). An advanced observability system, based on Prometheus and Grafana, provides for the visualization of infrastructure issues, effective debugging, and log aggregation.
Currently, Corefy’s team is implementing their new infrastructure using the skills and knowledge that SHALB provided them with. The customer praised our vast experience in software consulting, with Dmytro Dzubenko stating that: “SHALB is an expert in building software architecture; they did an excellent job.”
Any questions concerning infrastructure development, maintenance, or support? Get in touch because SHALB engineers love such tasks! Our team is communicative, organized, and works remotely as part of your working community. Book an online meeting or contact sales@shalb.com for more information.
The post Multi-cloud solution for retail data analytics company appeared first on SHALB.
Bedrock Analytics, a US company, is one of the key players in the global market of retail analytics, in particular CPG (Consumer Packaged Goods) data analytics. The company’s customers are manufacturers of beverages, food products, detergents and cosmetics that compete for a share of sales in supermarkets.
Bedrock Analytics uses AI algorithms to gather, process, and analyze data that comes from a variety of sources, such as point-of-sale data, syndicated data aggregators, retail store portals, e-commerce databases, consumer panel data sets, etc. The input is Big Data: gigantic arrays of information. The output is a content-rich analytical extract that would take teams much longer to produce in any other way. It is more storytelling than statistics: a set of tools that helps product manufacturers find the optimal way to reach customers.
Thanks to Bedrock Analytics’ digital products and services, unique craft products can succeed in retail and the largest brands can maintain their market shares. Well-designed, Big Data-based solutions can boost products that would otherwise struggle in a highly competitive marketplace.
The company is listed in the top twenty global players of CPG analytics and occupies an expert position alongside IBM, Microsoft, Manthan Software Services, Oracle, etc. So when a work inquiry came in August 2020 from Oakland, CA, sent by Navdip Bhachech, Senior Vice President of Engineering at Bedrock Analytics, we were excited to work with his team. After all, we had heard a lot about their products! From the very beginning it looked incredible, and then turned into an inspiring DevOps experience that is worth sharing.
The market of CPG data analytics has been growing continuously: it is estimated to increase by 20% during 2021–2028. The systems of Bedrock Analytics will have to process ever larger volumes of data, and their IT infrastructure should be able to handle that scope of operations, as their customers rely on the information Bedrock Analytics provides to make strategic business decisions.
The initial scope of work included two main points: containerization and migration to Kubernetes in order to reduce Amazon bills. However, the requirements were complicated by their need for a multi-cloud solution: it was necessary to deploy the technology stack of Bedrock Analytics on two public clouds, AWS and Microsoft Azure, at the same time with the same code base. Their goal was to expand beyond the US, and working with larger partners required Microsoft’s Azure cloud infrastructure.
Among the factors that simplified the task implementation was clarity from the customer’s side – Navdip Bhachech understood the scope of work and deep requirements of Bedrock Analytics, so the communication was smooth. The challenge lay in the uniqueness of the solution.
What Bedrock Analytics wanted was the ability to deploy multi-cloud for the same application, which was quite uncommon. “Doing things multi-cloud means that you have to consider a lot of subtle differences between how AWS and Azure do things,” commented Navdip Bhachech. “If you want it to work the same way, you have to tune it differently. And I think that was probably the most challenging part: keeping all of those details straight and organized in the work that was done.”
The SHALB team has strong Kubernetes expertise and the experience of implementing unconventional solutions. We were chosen for this job because of our technical skills and the ability to understand the space of the project. Commenting on the process of choosing a contractor, Navdip Bhachech admits:
Develop a multi-cloud platform with Kubernetes, Terraform and support of GitOps and Argo CD? Let us handle the task! SHALB engineers love such challenges!
As of 2019, the system of data orchestration at Bedrock Analytics was not optimal for processing a large volume of operations. Since it was based on AWS OpsWorks technology, the orchestration system was vendor-bound and could not be used in other clouds. This also caused inefficiencies and additional expenses. The product complexity and the outcomes of the infrastructure audit revealed the necessity for a new orchestration system based on Kubernetes and cloud-native technologies: Argo CD, Prometheus, etc.
Having decided on the optimal workflow, SHALB agreed with the development team on how to work with repositories: GitFlow and feature branching. As a result, we created CI/CD pipelines and chose a monorepo concept for infrastructure development.
Bedrock Analytics is a complex platform that solves hundreds of tasks concurrently. This means that there are approximately as many batch workloads for both analytics and orchestration. In this case, we decided to use Airflow, an open-source workflow engine, on top of Kubernetes. Serhii Matiushenko, leading DevOps engineer at SHALB, was assigned to implement automation, prepare the infrastructure code, and integrate it with the existing system. Along with other tasks, the process took nearly 3 months to accomplish.
Kubernetes primitives are necessary for the faultless operation and scaling of each workload. Describing the primitives with Helm charts and Kustomize proved to be quite a time-consuming process.
Since the specifics of Bedrock Analytics imply processing batch workloads, they needed an orchestrator to run data science jobs. They chose Airflow based on the development team’s prior experience with it and its suitability for Python-based machine learning environments.
Since there were no public Terraform modules available for Microsoft Azure, we had to develop them from scratch. Unlike Terraform modules for Amazon, which are plentiful and constantly improved, modules for Microsoft are fewer in number and of poorer quality.
The team had been using Jenkins, which required considerable effort from the developers to maintain. We chose Bitbucket CI/CD pipelines as an alternative to Jenkins, saving the developers time. With Bitbucket, your CI/CD pipelines run in the same place where the code resides.
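A `bitbucket-pipelines.yml` for such an infrastructure pipeline might be sketched as follows (the image version, branch name, and commands are assumptions for illustration, not the actual project configuration):

```yaml
pipelines:
  branches:
    main:
      - step:
          name: Terraform plan
          image: hashicorp/terraform:1.5   # hypothetical pinned version
          script:
            - terraform init -input=false
            - terraform plan -out=tfplan
          artifacts:
            - tfplan                       # pass the plan file to the next step
      - step:
          name: Terraform apply
          trigger: manual                  # require approval before applying
          image: hashicorp/terraform:1.5
          script:
            - terraform init -input=false
            - terraform apply -input=false tfplan
```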
At SHALB, we thoroughly document all stages of infrastructure development to make its further maintenance easier. The case of Bedrock Analytics was no exception.
After the implementation of the project tasks, we delivered the prepared solution to the customer. The delivery process took place in several stages.
As a result of the project, Bedrock Analytics received a more modern infrastructure, driven and managed by config. As Navdip Bhachech points out, they can now set up and tear down new environments much more easily and through source code. The flexibility of the new platform also provides for configurable scaling and tracking infrastructure changes, which enhances overall observability of the system. Their ability to manage cloud costs has also improved thanks to automation and better configuration of services.
The infrastructure solution designed by SHALB covered the customer’s business processes and automated most of them. As a result, we managed to optimize the costs of their infrastructure maintenance and accelerate their product TTM thanks to faster, simultaneous deployment to different clouds.
Commenting on the project, Navdip Bhachech said:
The case of Bedrock Analytics added another significant and challenging project to the SHALB portfolio. Our team is friendly, proactive, and true masters of their craft. But what is more important, we understand the space of your project, no matter how complex it might be. Modernize your stack and stand out from the crowd with competitive advantages! Book an online meeting or contact sales@shalb.com for more information.
The post Modernizing infrastructure stack for the largest bank in Georgia appeared first on SHALB.
Bank of Georgia (BoG) is one of the leading companies in the banking and financial services sector. It has a wide network of service centers and ATMs throughout Georgia, with representative offices in London, Budapest, and Tel-Aviv.
BoG is one of the largest employers in Georgia that actively supports healthcare, education, the environment, and other important social issues.
Banks can be very traditional in some ways, but even they cannot ignore modern trends. In order to stay competitive, you need to keep pace with the latest technological developments and best practices.
Vazha Pirtskhalaishvili, Head Of DevOps Engineering unit at BoG, commented:
BoG was determined to become a technologically advanced bank. Driven by this realization, they decided to future-proof their systems and make them cloud-ready. To do so, their search began for a contractor that would help them achieve their goals.
BoG initiated the project with a clear vision of what they wanted to achieve. Their criteria for choosing a contractor was equally clear: it had to be a trustworthy high-tech company with local representation in Tbilisi. SHALB was introduced as a potential contractor by HT Solutions, a Georgian IT consulting company and longtime partner of BoG.
That is how SHALB received a task to design a new infrastructure for the largest bank in Georgia. It was a unique and technologically challenging project that immediately ignited our professional interest: we were eager to start.
BoG wanted a technology solution that would allow them the flexibility to manage applications on infrastructure that best suited their needs. According to their values and goals, they opted for microservices architecture, containerization, and Kubernetes.
We were asked to design a solid Kubernetes-orchestrated platform with the ability to create and manage infrastructures to which BoG could migrate their microservices. One of the key requirements was the system’s fault tolerance: it was to run business-critical applications, with high availability of all architecture components. The solution also had to meet strict security requirements, integrate with the existing security and authorization systems, and reside in BoG’s in-house data centers.
The first step was refactoring the monolith into microservices. BoG teams were responsible for containerizing the applications, creating Docker images, and running builds. SHALB was tasked with migrating the services to Kubernetes, integrating them, and ensuring their smooth operation. Based on the prepared Docker images, our engineers created Helm charts, pods, and deployments that were deployed to the Kubernetes cluster.
The Kubernetes-driven platform design is based on VMware and Rancher technologies and has been implemented by means of Terraform.
vSphere, a cloud computing virtualization platform from VMware, provides the low-level architecture basis and unites all the servers into a single system. On top of that, we applied Terraform to create and provision Kubernetes clusters, and Rancher Kubernetes Engine (RKE) to manage them.
The clusters’ Control Plane is shared between two data centers, enabling automatic switching and traffic redirection in case the active DC fails. This scenario has been thoroughly tested as one of the critical customer requirements.
The network connectivity between container workloads has been implemented with Cilium CNI. Using the Cilium CNI network plugin, we created a least-privilege connectivity model that is aware of Layer 7 communications, further enhancing network security.
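For example, a least-privilege, L7-aware rule in Cilium can be expressed as a CiliumNetworkPolicy like the following sketch (labels, port, and paths are hypothetical, not the bank's actual policy):

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-gateway-to-payments   # hypothetical name
spec:
  endpointSelector:
    matchLabels:
      app: payments                 # hypothetical workload labels
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: gateway            # only the gateway may connect
      toPorts:
        - ports:
            - port: "8080"
              protocol: TCP
          rules:
            http:                   # Layer 7 awareness: restrict HTTP verbs and paths
              - method: GET
                path: "/api/.*"
```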
The robust monitoring system is designed to comply with strict security regulations. All the communication and behavior between microservices and the components inside the Kubernetes cluster is tracked. Any anomalous or potentially harmful behavior is detected and immediately reported to the security department. Aggregation of security reports is based on Falco rules; the reports are further streamed into the existing SIEM system.
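To illustrate this rule-based detection (the rule below is a hypothetical sketch, not BoG's actual ruleset), a Falco rule flagging unexpected outbound connections from a namespace might look like:

```yaml
- list: allowed_destination_ips
  items: ["10.0.0.10", "10.0.0.11"]        # hypothetical allowlist

- rule: Unexpected outbound connection from payments namespace
  desc: Detect outbound traffic from the payments namespace to non-allowlisted destinations
  condition: >
    outbound and container
    and k8s.ns.name = "payments"
    and not fd.sip in (allowed_destination_ips)
  output: >
    Unexpected outbound connection
    (pod=%k8s.pod.name command=%proc.cmdline connection=%fd.name)
  priority: WARNING
```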
It was essential to meet BoG’s high standards, in particular for network security. The bank operates several data centers that work under complex rules. The rules also define how the microservices communicate and connect with each other. As a result, our system had to be designed in accordance with these rules in order to provide secure connection, which was architecturally challenging. In addition, the project was required to be completed within a short timeframe. This made things even more complicated as almost everything had to be designed from scratch.
As a bank, BoG also has strict regulations in terms of working with other contractors and third-party teams. SHALB specialists had no access to the production site and had to prepare some solutions on their side before BoG engineers could implement them on theirs. This sometimes required double the work to perform and inevitably slowed the whole process down.
With one of the largest IT departments in the region, BoG has strong engineering teams that are very good at what they do. Considering the project complexity and scale, we were happy to join forces with their qualified staff and work together on some technical issues.
In particular, our team was stuck on the problem of how to properly configure Rancher to create servers on the VMware side in order to pass data to other systems that run on these servers under Kubernetes management. Finally, thanks to technical advice of BoG VMware-certified engineers, we managed to solve the problem and move forward.
The project is trailblazing both in terms of the application field (banking sector) and technical implementation (architecture and configuration of services). Despite the technologies in use being actively developed, there is still room for new features and components, both in Kubernetes and Rancher. Our custom solution covers the missing functionality and makes it work regardless of how this stack implements the features that the customer needs.
Also, the solution is set up on the customer’s own data centers and is based on the VMware virtualization platform, although normally such systems are designed for public clouds or their on-prem analogues like OpenShift.
On completion of the project, the customer received a flexible and up-to-date platform, and all the tooling needed to launch, scale, deploy and destroy clusters on it. BoG specialists duly appreciated the advantages of Kubernetes in terms of scaling and quickness of deployments: first, it allows for automatic scaling in times of higher demand and downscaling when demand is reduced, and second, it significantly accelerates deployments making them a lot faster than before. According to Vazha Pirtskhalaishvili, they noticed the difference almost immediately after the project implementation.
Commenting on the project, Vazha Pirtskhalaishvili confirmed:
Working on the BoG project gave us an inspiring DevOps experience and invaluable know-how of how to deploy cloud native in the fintech sector. Now we are ready to share this knowledge with you! Invest in modernizing your systems today to future-proof them for tomorrow’s challenges. Drop by for a friendly talk by booking an online meeting or contact sales@shalb.com for more information.