What is middleware?

Middleware is multipurpose software that provides services to applications beyond those offered by the operating system. Any software between the kernel and user applications can be middleware.
Analyst and systems theorist Nick Gall said, “Middleware is software about software.” Middleware doesn’t offer the functions of a traditional app; it connects software to other software. Middleware is the plumbing of your IT infrastructure: it allows data to flow from one app to another.

Empire and enterprise

Ancient Rome had one of the most remarkable sanitation systems in history. Its complex network of aqueducts and sewers was so important that Pliny the Elder counted them as Rome’s “most noteworthy achievement of all.” Just as the aqueducts of Rome carried water, enterprise middleware carries data from place to place. We’re not saying middleware is humanity’s greatest achievement, but a lot of other—perhaps more noteworthy—software can function because of middleware.
Plumbing might seem like a humble metaphor for middleware, but both are critical to operating large, complex systems—like Rome. Your enterprise is similar to a rapidly growing city: All parts of the city need water, just as all parts of your enterprise need data. Without plumbing, a city is inefficient and downright messy. Without middleware, your enterprise is the same way.

Why care about middleware?

If data is like water in your company’s plumbing, consider how much better things would be if you didn’t have to get a bucket, travel to the water pump, fill the bucket with water, and lug it back to where you were. Without middleware, that’s what you do every time you want to work. Having data piped anywhere in your enterprise is more convenient and more efficient.

What could you accomplish with your data on demand?

When you integrate your data across applications, you can focus on creating cool new stuff for your organization instead of spending your time on manual processes. With a modern application platform, for example, developers can focus on developing app functionality instead of managing how their app integrates with the rest of the environment.

#Source: https://www.redhat.com/en/topics/middleware/what-is-middleware#


Load Balancing, Affinity, Persistence, Sticky Sessions: What You Need to Know

A load-balancer in an infrastructure

We usually install a load balancer between the clients and the application servers.
This is a logical placement: when working at layer 7 (aka the application layer), the load balancer acts as a reverse proxy.
So, from a physical point of view, it can be plugged anywhere in the architecture:
  • in a DMZ
  • in the server LAN
  • as front of the servers, acting as the default gateway
  • far away, in another, separate datacenter

Why is load balancing web applications a problem?

Well, HTTP is not a connection-oriented protocol: the session is totally independent of the TCP connections.
Even worse, an HTTP session can be spread over several TCP connections…
When there is no load balancer involved, there is no issue at all: the single application server is aware of the session information of all users, and whatever the number of client connections, they are all directed to that unique server.
When using several application servers, the problem occurs: what happens when a user sends requests to a server that is not aware of its session?
The user will be sent back to the login page, since the application server can’t access the session: the user is considered a new user.
To avoid this kind of problem, there are several options:
  • Use a clustered web application server, where the sessions are available to all the servers
  • Share users’ session information in a database or a file system accessible from all application servers
  • Use IP-level information to maintain affinity between a user and a server
  • Use application-layer information to maintain persistence between a user and a server
NOTE: you can mix the different techniques listed above.

Building a web application cluster

Only a few products on the market allow administrators to create a cluster (WebLogic, Tomcat, JBoss, etc.).
I’ve never configured any of them, but from the administrators I’ve talked to, it does not seem to be an easy task.
By the way, for web applications, clustering does not mean scaling. Later, I’ll write an article explaining why, even if you’re clustering, you may still need a load balancer in front of your cluster to build a robust and scalable application.

Sharing users’ sessions in a database or a file system

This technique applies to application servers that have no clustering features, or when you don’t want to enable clustering.
It is pretty simple: you choose a way to share your sessions, usually a file system like NFS or CIFS, or a database like MySQL or SQL Server, or a memcached instance, then you configure each application server with the proper parameters to share the sessions and to access them when required.
I’m not going to give any details on how to do it here; a quick search with the proper keywords will get you answers very quickly.
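As a toy illustration of the idea (my own sketch, not a production setup; the paths, function names, and session format below are invented), two web servers pointed at the same shared directory can read each other’s session files:

```shell
# Two "servers" sharing session state through one directory: the same
# idea as pointing every app server's session store at an NFS/CIFS mount.
SHARED=$(mktemp -d)

save_session() {    # called by the server that handled the login
    echo "$2" > "$SHARED/sess_$1"
}

load_session() {    # any other server can now read the same session
    cat "$SHARED/sess_$1"
}

save_session u123 "user=alice;cart=3"    # written by server A
load_session u123                        # read back by server B
# prints: user=alice;cart=3
```

Whether the "directory" is NFS, a database table, or memcached only changes the storage backend; the point is that every server resolves the session by its id from the same shared store.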

IP source affinity to server

An easy way to maintain affinity between a user and a server is to use the user’s IP address: this is called source IP affinity.
There are a lot of issues with this approach, which I’m not going to detail right now (another article to write).
The only thing you have to know is that source IP affinity should be the last method you use when you want to “stick” a user to a server.
It will solve our issue only as long as the user keeps a single IP address and never changes it during the session.

Application layer persistence

Since a web application server has to identify each user individually, to avoid serving one user’s content to another, we can use this information, or at least try to reproduce the same behavior in the load balancer, to maintain persistence between a user and a server.
The information we’ll use is the session cookie, either one set by the load balancer itself or one set up by the application server.

What is the difference between persistence and affinity?

Affinity: using information from a layer below the application layer to direct a client’s requests to a single server.

Persistence: using application-layer information to stick a client to a single server.
Sticky session: a session maintained by persistence.

The main advantage of persistence over affinity is that it is much more accurate; but sometimes persistence is not doable, so we must rely on affinity.
With persistence, we are 100% sure that a user will be redirected to a single server.
With affinity, the user may be redirected to the same server…

What is the interaction with load balancing?

In a load balancer, you can choose among several algorithms to pick the server from a web farm that your client requests will be forwarded to.
Some algorithms are deterministic: they use a piece of client-side information to choose the server, and always send the owner of that information to the same server. This is where you usually do affinity 😉 e.g. “balance source”.
Some algorithms are not deterministic: they choose the server based on internal information, whatever the client sent. This is where you do neither affinity nor persistence 🙂 e.g. “balance roundrobin” or “balance leastconn”.
I don’t want to go too deep into details here; that can be the purpose of a new article about load-balancing algorithms…
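To make “deterministic” concrete, here is a toy sketch of a source-based choice (the hash below is invented for illustration; HAProxy’s real “balance source” hashes the source address over the number of running servers):

```shell
# Deterministic pick: the same client IP always maps to the same server.
N=3   # number of servers s1..s3

balance_source() {
    ip="$1"
    sum=0
    # crude stand-in for a real hash: sum the IP's octets
    for octet in $(echo "$ip" | tr '.' ' '); do
        sum=$((sum + octet))
    done
    echo "s$((sum % N + 1))"
}

balance_source 192.168.0.10   # same IP ...
balance_source 192.168.0.10   # ... same server, every time
```

A non-deterministic algorithm like round-robin would instead keep internal state (a counter, or connection counts) and ignore the client entirely.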
You may be wondering: “we have not yet spoken about persistence in this chapter”. That’s right, let’s do it.
As we saw previously, persistence means that the server can be chosen based on application-layer information.
This means that persistence is another way to choose a server from a farm, just as a load-balancing algorithm does.
Actually, session persistence takes precedence over the load-balancing algorithm.
Let’s show this on a diagram:
Client request
      |
      V
HAProxy Frontend
      |
      V
backend choice
      |
      V
HAProxy Backend
      |
      V
Does the request contain ----------
persistence information?          |
      |                           |
      | NO                        | YES
      V                           |
Server choice by                  |
load-balancing algorithm          |
      |                           |
      V                           |
Forwarding request  <-------------
to the server
Which means that when doing session persistence in a load balancer, the following happens:
  1. the user’s first request comes in without session persistence information
  2. the request bypasses the session-persistence server choice, since it carries no session persistence information
  3. the request passes through the load-balancing algorithm, where a server is chosen and assigned to the client
  4. the server answers back, setting its own session information
  5. depending on its configuration, the load balancer can either use this session information or set up its own before sending the response back to the client
  6. the client sends a second request, now with the session information it learnt during the first request
  7. the load balancer chooses the server based on this client-side information
  8. the request DOES NOT PASS THROUGH the load-balancing algorithm
  9. the server answers the request
and so on…
At HAProxy Technologies we say that “Persistence is an exception to load balancing“.
And the demonstration is just above.
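The steps above can be sketched as a tiny simulation (my own toy model of the decision flow, not HAProxy code; the cookie name and server ids are assumptions):

```shell
NEXT=1   # round-robin state of the (simulated) load-balancing algorithm

pick_server() {
    cookie="$1"   # e.g. "SERVERID=s2", or empty on a first request
    case "$cookie" in
        SERVERID=*)
            # Persistence information present: honor it and BYPASS
            # the load-balancing algorithm.
            echo "${cookie#SERVERID=}"
            ;;
        *)
            # No persistence information: fall through to the algorithm.
            echo "s$NEXT"
            if [ "$NEXT" -eq 1 ]; then NEXT=2; else NEXT=1; fi
            ;;
    esac
}

pick_server ""              # first request: the algorithm picks s1
pick_server "SERVERID=s1"   # follow-up sticks to s1, algorithm skipped
pick_server ""              # a new client: the algorithm picks s2
```

Note how the round-robin counter only advances for requests without persistence information: persistence really is an exception to load balancing.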

Affinity configuration in HAProxy / Aloha load-balancer

The configuration below shows how to do affinity within HAProxy, based on client IP information:

frontend ft_web
    default_backend bk_web
backend bk_web
    balance source
    hash-type consistent # optional
    server s1 check
    server s2 check

Web application persistence
In order to provide persistence at the application layer, we usually use cookies.
As explained previously, there are two ways to provide persistence using cookies:
  • Let the load balancer set up a cookie for the session.
  • Use an application cookie, such as ASP.NET_SessionId, JSESSIONID, PHPSESSID, or any other chosen name.
The configuration below shows how to configure HAProxy / Aloha load balancer to inject a cookie in the client browser:

frontend ft_web
    default_backend bk_web
backend bk_web
    balance roundrobin
    cookie SERVERID insert indirect nocache
    server s1 check cookie s1
    server s2 check cookie s2
Two things to notice:

1/ the line “cookie SERVERID insert indirect nocache”:
This line tells HAProxy to set up a cookie called SERVERID only if the user did not come with one. It will append a “Cache-Control: nocache” header as well, since this type of traffic is supposed to be personal and we don’t want any shared cache on the internet to cache it.
2/ the statement “cookie XXX” on the server line definition:
It provides the value of the cookie inserted by HAProxy. When the client comes back, HAProxy knows directly which server to choose for this client.

So what happens?
1/ On the first response, if the server chosen by the load-balancing algorithm is s1, HAProxy will send the client the following header:
Set-Cookie: SERVERID=s1

2/ On its second request, the client will send the header below:
Cookie: SERVERID=s1
Basically, this kind of configuration is compatible with an active/active Aloha load-balancer cluster configuration.
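A minimal sketch of the insert/indirect behavior (my own toy model, not HAProxy code):

```shell
# "insert": set SERVERID only when the client did not present one.
# "indirect": the cookie is consumed by the load balancer and not
# passed on to the backend server (modeled here by simply dropping it).

handle_response() {
    client_cookie="$1"   # cookie received from the client, may be empty
    server="$2"          # server chosen for this client
    if [ -z "$client_cookie" ]; then
        echo "Set-Cookie: SERVERID=$server"   # first visit: inject
    fi                                        # otherwise: nothing added
}

handle_response "" s1              # prints: Set-Cookie: SERVERID=s1
handle_response "SERVERID=s1" s1   # prints nothing
```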

The configuration below shows how to configure HAProxy / Aloha load balancer to use the cookie set up by the application server to maintain persistence between a client and a server:
frontend ft_web
    default_backend bk_web
backend bk_web
    balance roundrobin
    cookie JSESSIONID prefix nocache
    server s1 check cookie s1
    server s2 check cookie s2
Just replace JSESSIONID with your application’s cookie name. It can be anything, like the defaults from PHP and IIS: PHPSESSID and ASP.NET_SessionId.

So what happens?

1/ On the first response, the server will send the client the following header:
Set-Cookie: JSESSIONID=i12KJF23JKJ1EKJ21213KJ

2/ when passing through HAProxy, the cookie is modified like this:
Set-Cookie: JSESSIONID=s1~i12KJF23JKJ1EKJ21213KJ

Note that the Set-Cookie value has been prefixed with the server’s cookie value (“s1” in this case), with a “~” used as a separator between this prefix and the original cookie value.

3/ On its next request, the client will send the header below:
Cookie: JSESSIONID=s1~i12KJF23JKJ1EKJ21213KJ

4/ HAProxy strips the prefix on the fly, restoring the cookie to its original value before forwarding the request to the server:
Cookie: JSESSIONID=i12KJF23JKJ1EKJ21213KJ

Basically, this kind of configuration is compatible with active/active Aloha load-balancer cluster configuration.
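The prefix rewriting in both directions can be sketched like this (a toy model using sed, not HAProxy’s implementation):

```shell
# Response direction: prefix the application cookie with the server id.
prefix_cookie() {
    # $1 = server id (e.g. s1), $2 = full "Set-Cookie: NAME=value" header
    echo "$2" | sed "s/=/=$1~/"
}

# Request direction: strip the "id~" prefix before forwarding upstream.
strip_cookie() {
    # $1 = full "Cookie: NAME=id~value" header
    echo "$1" | sed 's/=[^~]*~/=/'
}

prefix_cookie s1 "Set-Cookie: JSESSIONID=i12KJF23JKJ1EKJ21213KJ"
# prints: Set-Cookie: JSESSIONID=s1~i12KJF23JKJ1EKJ21213KJ
strip_cookie "Cookie: JSESSIONID=s1~i12KJF23JKJ1EKJ21213KJ"
# prints: Cookie: JSESSIONID=i12KJF23JKJ1EKJ21213KJ
```

The server never sees the prefix, so the application keeps working unmodified; only the load balancer reads and removes it.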

What happens when my server goes down?

When doing persistence, if a server goes down, HAProxy will redispatch the user to another server.
Since the user gets connected to a new server, that server may not be aware of the session, and the user may be redirected to the login page.
But this is not a load-balancer problem; it is related to the application server farm.

Source: https://www.haproxy.com/blog/load-balancing-affinity-persistence-sticky-sessions-what-you-need-to-know/


Setting Up HAProxy as a Load Balancer with Sticky Sessions

Sticky Sessions:
Some applications require that a user continues to connect to the same backend server. This persistence is achieved through sticky sessions, using the appsession parameter in the backend that requires it.
Source: https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts

In this tutorial, I will show you how to set up HAProxy as a load balancer that uses sticky sessions. In case you didn’t already know, HAProxy is a reliable and free high-availability load balancer that allows you to distribute web traffic among multiple web servers.

For the purposes of this example, let’s say that we have three servers with three separate IP addresses:
  • The server that has HAProxy installed. This is our load balancer.
  • Our first web server. We will call this Server A.
  • Our second web server. We will call this Server B.
Note that those IP addresses are fake and that I am only using them as an example.

Sticky sessions.

The problem with using a load balancer is that by default, traffic bounces from one server to the next. For example: If your web server is using PHP’s default session handling, then all session data will be saved to a file in a temporary location on that web server. If a user logs into Server A, his or her session data will be saved in a file on Server A. If the load balancer distributes their next request to Server B, then the web server on Server B will be unable to read their session data. Why? Because the user’s session data was saved on Server A and Server B has no way to access it. There is no synchronization between our web servers as they are completely unaware of each other.

To solve this problem, we configure our haproxy load balancer to use sticky sessions. When sticky sessions are enabled, a client will be “stuck” to a certain web server. i.e. If a user lands on Server B, then he or she will be “stuck” on Server B until the sticky session cookie has expired.

Enabling haproxy sticky sessions.

Note that I have HAProxy version 1.6.3 installed on Ubuntu 16.04.3 LTS. On my server, the configuration file for HAProxy is located at /etc/haproxy/haproxy.cfg. This may be different on your server.

Let’s take a look at the listen proxy that I have created:

listen my_website_name
    mode http
    maxconn 40000
    balance roundrobin
    option http-keep-alive
    option forwardfor
    cookie SRVNAME insert
    timeout connect 30000
    timeout client  30000
    timeout server  30000
    server ServerA cookie SA check
    server ServerB cookie SB check

As you can see, I have created a listen proxy for my_website_name. There are three important parts to this configuration that are needed for sticky sessions.

cookie SRVNAME insert: This directive enables cookies and tells haproxy to insert the name of the current server into it.

server ServerA cookie SA check: Here, we added Server A to our load balancer. We’ve called it “ServerA” and bound it to its IP address on port 80. More importantly for us, we have named the cookie for this server “SA”.
server ServerB cookie SB check: Here, we added Server B to our load balancer. We’ve called it “ServerB” and bound it to its IP address on port 80. In this case, we have named the cookie for this server “SB”.
Once you have made the above changes, simply restart HAProxy for them to take effect. If everything is correct, new users will stick to either Server A or Server B, i.e. their requests will not randomly bounce between the two.

Hopefully, you found this article to be helpful!

Source: http://thisinterestsme.com/haproxy-sticky-sessions/


How to back up/restore VMware ESXi 6.x config

#LastUpdate: #11:30 2018.10.24
#Source: https://graspingtech.com/backup-vmware-esxi-6-5-configuration/

echo "How to Backup the ESXi Configuration"

echo "####################################"
cd /vmfs/volumes/disk_05_ssd_1000gb_ss860evo/HOST.BACKUP/
vim-cmd hostsvc/firmware/backup_config
cp /scratch/downloads/523b564d-e1e8-2a81-99b4-b90ae42ef2fb/* .

ls -lh


[root@srv118:/vmfs/volumes/5bcee239-c58d7222-236e-fcaa14e095ee/HOST.BACKUP] ls -lh

total 1024
-rwx------    1 root     root       18.2K Oct 24 04:30 configBundle-srv118.tgz
-rwxr-xr-x    1 root     root         269 Oct 24 04:30 host-backup-config.sh


#+In order to restore the backup of the ESXi configuration, you need to install the same version and build number of ESXi on your hardware

#+Once connected you will need to transfer the backup archive to "/tmp/configBundle.tgz" on the host using a transfer utility

#+The host will reboot with the old configuration applied. You can now exit maintenance mode and use the host

echo "Restoring the ESXi Configuration"

echo "################################"

vim-cmd hostsvc/maintenance_mode_enter

vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz

ls -lh
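The restore steps above can be wrapped in a small script (my own hypothetical wrapper; the vim-cmd calls only work on an ESXi host, so a DRY_RUN guard lets you preview the commands anywhere):

```shell
DRY_RUN=${DRY_RUN:-1}   # set DRY_RUN=0 on the actual ESXi host

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: $*"   # preview mode: print instead of executing
    else
        "$@"
    fi
}

run vim-cmd hostsvc/maintenance_mode_enter
run vim-cmd hostsvc/firmware/restore_config /tmp/configBundle.tgz
# The host reboots with the restored configuration; afterwards,
# take it out of maintenance mode:
run vim-cmd hostsvc/maintenance_mode_exit
```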