usability Archives - XI Vero (https://www.xivero.com), Thu, 03 Feb 2022

Fintech system in SaaS for 2000+ microservices
https://www.xivero.com/fintech-system-in-saas-for-2000-microservices/
Wed, 01 Sep 2021 18:48:07 +0000

Let’s talk about architecture in BigTech, technology selection and accepted “rules”.
We will touch on the topic of freedom in adopting architectural solutions for the Product and Core teams.
We’ll get into Fintech solutions created as an important component of global SaaS: microservices, APIs and tags, Event Sourcing, Feature Toggles, SDLC, CI/CD, DevOps, monitoring, analytics, etc.

Global companies have long used microservices. The monolithic applications of Amazon, Coca-Cola, and Netflix, for example, eventually evolved into larger, distributed infrastructures. These brands benefited from the decision and attracted even bigger audiences. But a trend doesn't mean monoliths are a thing of the past. My team and I are not in the habit of blindly chasing new trends: we always analyze when a given option is effective and how to switch to it safely.

Our fintech project was built on a monolithic approach. A monolith resembles a Rubik's cube: take one piece out to assemble a new shape, or add extra components, and the cube no longer works as a whole. Every element contributes to a single piece of functionality; if any part is missing, broken, or out of place, the colored puzzle won't come together.

Why did we choose a monolith? First, it lets a startup launch the project faster. When you have to present an MVP in a month but have no specific requirements or product specification, a monolith is the only savior. Its flexibility shows in the variety of tools that can be integrated to simplify development. In addition, changes and updates can be deployed all at once rather than piece by piece. Second, a monolith is easy and fast to scale at the start. For our team, the benefits were clear.

More specialists, including newcomers, can join development on a monolith. It is simple and straightforward to work with: all components are interconnected and interdependent, so a novice will find it much easier to understand the code and logic of a monolith than of a set of microservices.

Why do we all make bad architecture and how to stop doing it?
https://www.xivero.com/why-do-we-all-make-bad-architecture-and-how-to-stop-doing-it/
Fri, 23 Apr 2021 17:57:38 +0000

We will look at the kinds of errors in the design of large systems that lead to serious, even catastrophic, consequences for a business. There will be interesting real-life catastrophes, with analyses of their causes from people who professionally perform technical due diligence of companies and work as consultants fixing problematic architectures.

The world became more complex in the mid-1990s. Companies coveted web applications that ran on the intranet so they could get rid of desktop deployments, and those applications had to serve multiple departments, sometimes even reaching beyond the company's borders. A new paradigm, component-based development (CBD), was established. It promised reusability, scalability, flexibility, and the ability to extract legacy code (usually written in COBOL) into components. We started breaking our systems down into large functional parts and worked very hard to get these components talking to each other. Java was invented, and suddenly everyone wanted to write code in Java (apparently some still do). Components ran on incredible technologies such as application servers and CORBA (look it up on Wikipedia to impress your colleagues). The good old days of object request brokers!

At the time, I was working at a large international bank trying to create a methodology for component-oriented development. Even with a well-armed team of Andersen consultants, it took us three years to write the damn thing. In the end, both the paradigm and the technology proved too complicated to write decent and well-functioning programs. It just didn’t work that way.

Service-oriented architecture.

At that point, in the early years of the 21st century, I thought we had gotten rid of distributed software development and started building web applications. Everyone seemed to bravely ignore Martin Fowler's First Law of Distributed Object Design: don't distribute your objects. Gradually we moved on to the next distributed computing paradigm, repackaging the promise of component-based development into an updated set of technologies. We now did business process modeling (BPM) and implemented those processes on an enterprise service bus (ESB), with components providing services. We were in the era of service-oriented architecture, known as SOA.

After CBD, SOA seemed easier. As long as the components (the service providers) were connected to the enterprise service bus, we figured we knew how to build scalable and agile systems. We now had much smaller components that we could extract from existing systems (written not only in COBOL, but also in PowerBuilder, .NET, and Java). The necessary books on design patterns were written, and the world was ready to get down to business. This time we were going to pull it off!

This time I was working for an international transportation company, and we were building software around SAP middleware, which supplied tools for both the ESB and BPM. Now we didn't just need Java and .NET developers; we also had middleware developers and SAP consultants working for us. And even though Agile was brought in to speed up development (I know, that's not what Agile is for), projects were still too slow. Moreover, when all the puzzle pieces finally fell into place, we started to realize that integration testing and deploying new releases were getting more difficult by the day.

Finally: microservices!

I hope you’ll forgive me for such a long and winding introduction to the subject of microservices. You may be thinking, “Why do we need another article on microservices? Isn’t there already enough literature on the subject?” In general, yes, there is. But if you look carefully at the stream of articles on the Internet, most of them only describe the benefits and features of microservices (sing “hallelujah”), and some describe the few well-known pioneers (Netflix, Amazon, and Netflix, and Amazon, and Netflix…). Only a few articles actually dig deeper, and those tend to amount to a summary of the technologies used to implement microservices. And that’s only scratching the surface.

And here a little history doesn’t hurt. Interestingly, the promised benefits and capabilities of microservices’ predecessors are still with us. Microservices promise scalable and flexible systems built from small components that can easily be deployed independently, which also lets each component use the technology that suits it best. In other words, the same promises we bought into with CBD and SOA in the past. Nothing new here, but that doesn’t mean microservices aren’t worthy of close consideration.

Automating Node.js deployment to a production environment with Shipit on CentOS 7
https://www.xivero.com/automating-node-js-deployment-to-a-production-environment-with-shipit-on-centos-7/
Fri, 26 Mar 2021 18:50:00 +0000

Shipit is a versatile deployment and automation tool for Node.js developers. It uses a task flow system based on the popular Orchestrator package, a login system and interactive SSH commands based on OpenSSH, and an extensible API. Developers can use Shipit to automate build and deployment workflows for a variety of Node.js applications.

Shipit workflows allow developers not only to configure tasks, but also to specify the order in which they are executed, whether synchronous or asynchronous execution is required, and the execution environment.
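Such a workflow lives in a shipitfile.js at the project root. The following is a minimal sketch, not this tutorial's exact file: it assumes the shipit-cli and shipit-deploy npm packages are installed, and the repository URL, deploy path, and server address are illustrative placeholders.

```javascript
// shipitfile.js — a minimal sketch, assuming shipit-cli and shipit-deploy
// are installed; all paths, URLs, and host names below are placeholders.
module.exports = shipit => {
  // shipit-deploy contributes the standard fetch/update/publish deploy tasks
  require('shipit-deploy')(shipit);

  shipit.initConfig({
    default: {
      deployTo: '/home/deployer/example-app',            // placeholder path
      repositoryUrl: 'https://example.com/user/app.git', // placeholder repo
      keepReleases: 5, // how many old releases to keep around for rollback
    },
    production: {
      servers: 'deployer@app', // placeholder user@host for the app server
    },
  });
};
```

With a file like this in place, a deploy to the production environment would be started from the local machine with `npx shipit production deploy`.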

In this tutorial, we will install and configure Shipit to deploy a Node.js application from a local development environment to a production environment. We will use Shipit to deploy the application and configure the remote server by:

  • transferring the Node.js application files from the local environment to the production environment (using rsync, git, and ssh);
  • installing the application dependencies (node modules);
  • setting up and managing the Node.js processes on the remote server with PM2.
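The steps above map naturally onto a custom Shipit task. The sketch below is a hedged illustration, not the tutorial's final file: the task name, app entry point, and PM2 process name are hypothetical, and it assumes shipit-deploy handles the file transfer and exposes the path of the newly published release.

```javascript
// Sketch of a post-deploy task that installs dependencies and (re)starts
// the app under PM2 on the remote server. Task, file, and process names
// are illustrative; shipit.remote() runs a shell command over SSH on the
// servers configured for the current environment.
module.exports = shipit => {
  require('shipit-deploy')(shipit);

  shipit.initConfig({
    default: {
      deployTo: '/home/deployer/example-app',            // placeholder path
      repositoryUrl: 'https://example.com/user/app.git', // placeholder repo
    },
    production: { servers: 'deployer@app' }, // placeholder user@host
  });

  shipit.task('pm2-server', async () => {
    // Install production dependencies inside the new release directory.
    await shipit.remote(`cd ${shipit.releasePath} && npm install --production`);
    // Remove any previous PM2 process of the same name, ignoring "not found".
    await shipit.remote('pm2 delete -s app || :');
    // Start the freshly published release under PM2.
    await shipit.remote(`pm2 start ${shipit.releasePath}/app.js --name app`);
  });

  // Run the task once shipit-deploy signals that the release is published.
  shipit.on('published', () => shipit.start('pm2-server'));
};
```

Hooking into the `published` event keeps the PM2 restart out of the critical deploy path: if it fails, the release files are already on the server and the task can be rerun on its own.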

Prerequisites

Before starting this tutorial, you will need the following:

  • Two CentOS 7 servers (in this tutorial we will call them app and web) with a configured private network, as instructed in the tutorial Configuring a Node.js application for a production environment in CentOS 7.
  • An Nginx web server (on the web server) secured with TLS/SSL, as described in the tutorial Securing Nginx with Let’s Encrypt in CentOS 7. If you complete the prerequisites in chronological order, you will only need to complete steps 1, 4, and 6 on the web server.
  • Node.js and npm installed in the production environment. This tutorial uses Node.js 10.17.0; installing Node.js also installs npm, and this tutorial uses npm 6.11.3. To install Node.js on macOS or Ubuntu 18.04, follow Installing Node.js and Creating a Local Development Environment in macOS or the “Installation with a PPA” section of Installing Node.js in Ubuntu 18.04.
  • A local development machine with rsync and git installed. On macOS you can install them with Homebrew; for instructions on installing git on Linux distributions, see the Installing Git tutorial.
