DHCP stands for Dynamic Host Configuration Protocol, a mechanism that gives network devices an IP address. When a new device joins a network, DHCP recognises the device and assigns it an IP address automatically, according to the DHCP rules configured for that network.

This makes life easier for administrators: the network administrator does not have to assign an IP address manually every time a device is added. In some settings, such as public Wi-Fi, new devices join the network constantly, so manual IP assignment would be impractical.

Your home network router, for example, will automatically use DHCP to assign IP addresses. However, just because a device is enabled for DHCP does not mean that it acts as a DHCP server; it may simply be a DHCP client.

Understanding how DHCP works

It’s fantastic when everything on a network simply works, your local network printer included, and you might wonder what it is that makes sure network devices are always addressable.

First, every network device has a unique identifier called the MAC address, which is assigned at the factory. On the DHCP server it is possible to reserve a static IP address for a specific MAC address. So whenever a device such as a printer reboots, it simply receives the same IP address that has been reserved for it.

Printing the network configuration of your printer will typically show that DHCP is enabled and that no static IP address is configured on the device itself; the IP assignment is in fact handled by the DHCP server.
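A reservation like the one described above can be configured on most DHCP servers. As a sketch, here is roughly what it looks like in dnsmasq, a DHCP server commonly found on home routers; the MAC address and IP values are made-up examples:

```
# Always give the office printer (identified by its MAC address)
# the same IP address and a friendly hostname
dhcp-host=aa:bb:cc:dd:ee:ff,192.168.1.50,office-printer

# The general pool that all other clients draw from
dhcp-range=192.168.1.100,192.168.1.200,12h
```

From the printer’s point of view nothing changes: it still asks the DHCP server for an address, it simply always gets the same answer.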

Advantages and disadvantages of DHCP

Using DHCP to handle IP address assignment is very convenient, and DHCP is also known for being very reliable. That stability comes from a couple of techniques built into the protocol: failover, renewal and rebinding.

The DHCP server gives each DHCP client a “lease” on an IP address, and this lease must be renewed at some point. A client first tries to renew the lease halfway through the lease period. If the server that granted the lease is down, the client keeps sending repeated renewal requests, and later in the lease period it “rebinds” by broadcasting its request so that any available DHCP server can extend the lease.
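The renewal and rebinding points have conventional defaults in the DHCP specification (RFC 2131): renewal starts at 50% of the lease time and rebinding at 87.5%. The arithmetic is easy to sketch:

```python
# Sketch of the default DHCP lease timers from RFC 2131:
# T1 (renewal) fires at 50% of the lease, T2 (rebinding) at 87.5%.

def lease_timers(lease_seconds: int) -> tuple[float, float]:
    """Return (T1, T2): when the client starts renewing, then rebinding."""
    t1 = lease_seconds * 0.5     # unicast renewal to the original server
    t2 = lease_seconds * 0.875   # broadcast rebinding to any server
    return t1, t2

# A common home-router lease of 24 hours:
t1, t2 = lease_timers(24 * 3600)
print(t1, t2)  # 43200.0 75600.0
```

So with a one-day lease, a client quietly renews after 12 hours, and only starts broadcasting for help after 21 hours, which is why a brief DHCP server outage usually goes unnoticed.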

However, there are security concerns with DHCP because the protocol has no authentication mechanism. For this reason DHCP is vulnerable to a range of cyber attacks, such as rogue DHCP servers and DHCP starvation.


FTP (File Transfer Protocol) is the oldest protocol covered here. Its original specification was put together by Abhay Bhushan and released on April 16, 1971. It has been updated many times since, and the latest version supports IPv6.

FTP is a standard network protocol best known for moving files from one host (machine or operating system) to another over a TCP/IP-based network, typically uploading files to or downloading them from a server. Unfortunately, FTP with its default settings offers very poor security: it authenticates users in clear text, which leaves credentials vulnerable to a man-in-the-middle attack, among others.

For secure transmission, where content, username and password are protected, FTP is combined with SSL/TLS (FTPS) or replaced altogether with the SSH File Transfer Protocol (SFTP).
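Python’s standard library exposes both styles: plain ftplib.FTP and ftplib.FTP_TLS for the SSL/TLS-protected variant. The sketch below lists a remote directory over FTPS; the host and credentials are hypothetical placeholders:

```python
# Sketch: listing a directory over FTPS (FTP with TLS) using only the
# standard library. Host and credentials below are hypothetical.
from ftplib import FTP_TLS

def list_remote_dir(host: str, user: str, password: str) -> list[str]:
    """Connect with TLS, log in, and return the filenames in the home directory."""
    with FTP_TLS(host) as ftps:
        ftps.login(user, password)  # credentials travel over the encrypted control channel
        ftps.prot_p()               # switch the data channel to TLS as well
        return ftps.nlst()          # names of files in the current directory

# Usage (against a real server):
#   files = list_remote_dir("ftp.example.com", "alice", "secret")
```

Note the `prot_p()` call: with FTPS the control and data channels are protected separately, and forgetting to secure the data channel is a common mistake.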


http:// is what traditionally sits at the front of a web address. If you want to visit a site or insert a link, that’s how it has to look. Browsers and apps will often make it easier for you by adding this bit in, but no matter who puts it there, it’s essential. Tim Berners-Lee created the World Wide Web in 1990 when he came up with HTTP (HyperText Transfer Protocol), and it’s the protocol that the whole web rests on.

When you type a web address into your web browser, what you’re actually doing is sending an HTTP request to the web server, asking for data from that site. The protocol’s main job is to transmit hypertext data. HTTP follows the client-server model: the client (typically a web browser) sends a request to a server, which responds with the content of the requested page, or with an error message if the page can’t be found. Whenever software needs to access Internet content, it uses the HTTP protocol.
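The request itself is just structured text. As a sketch, this is roughly what a browser sends for a page on example.com, built by hand so you can see the wire format; no network connection is made:

```python
# Sketch: the text a browser sends for an HTTP/1.1 GET request.
# No network involved; this only demonstrates the wire format.

def build_get_request(host: str, path: str = "/") -> str:
    return (
        f"GET {path} HTTP/1.1\r\n"  # method, resource, protocol version
        f"Host: {host}\r\n"         # mandatory header in HTTP/1.1
        "Connection: close\r\n"
        "\r\n"                      # blank line ends the headers
    )

request = build_get_request("example.com")
print(request)

# A successful response begins with a status line such as:
#   HTTP/1.1 200 OK
# followed by its own headers, a blank line, and the page content.
```

Every exchange on the web boils down to one of these request/response pairs.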

One of the things to remember about HTTP is that it’s a stateless protocol, meaning that it doesn’t save session information or details about who has participated in the communication, so every request that reaches the server is treated separately. However, there are mechanisms layered on top of HTTP that can link subsequent requests together, allowing user activity to be tracked; cookies, HTTP sessions and JavaScript are common examples. HTTPS (HTTP Secure) is the most popular way of securing an HTTP connection, and it uses the SSL or TLS encryption protocols.


HTTP/2 (HTTP/2.0) is a major revision of the good old HTTP network protocol. Google’s experimental SPDY protocol was the basis for HTTP/2, and the new protocol was developed by httpbis, the IETF Hypertext Transfer Protocol working group.

HTTP/2 has numerous goals: a negotiation mechanism that allows clients and servers to choose between HTTP/1.1, HTTP/2 or other, non-HTTP protocols; improved page load speed through compression of HTTP headers; pipelining of requests; and multiplexing of multiple requests over the same TCP connection. Keeping a high level of compatibility with the old HTTP/1.1 is also among HTTP/2’s priorities.

HTTP/2 works with both http:// and https:// URIs, but encryption has become the de facto standard, since major browsers only support HTTP/2 over TLS.
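In practice, the version negotiation over TLS happens through the ALPN extension: during the handshake the client advertises the protocols it speaks and the server picks one. Python’s ssl module can set this up; the sketch below only prepares the context, no connection is opened:

```python
# Sketch: advertising HTTP/2 during the TLS handshake via ALPN.
# The client offers "h2" and falls back to "http/1.1" if the
# server doesn't speak HTTP/2.
import ssl

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2", "http/1.1"])  # preference order

# After wrapping a socket with this context and completing the TLS
# handshake, ssl_sock.selected_alpn_protocol() reports the winner
# ("h2" for HTTP/2, "http/1.1" otherwise).
print(ssl.HAS_ALPN)  # True on any modern Python/OpenSSL build
```

This is why no separate round trip is needed to discover HTTP/2 support: the choice piggybacks on the TLS handshake that an https:// connection performs anyway.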


HTTP/3 (H3) is the third and latest revision of HTTP (Hypertext Transfer Protocol). HTTP/3 is based on Google’s “HTTP-over-QUIC”. Cloudflare and Chrome already support HTTP/3, and Firefox is expected to follow in late Q3 of 2019.

One of the core features of HTTP/3 is that instead of using TCP as the transport layer, it uses QUIC, which makes streams first-class citizens at the transport layer. QUIC streams share the same QUIC connection, so no extra handshakes or slow starts are needed to create new ones. Packet loss on one stream does not affect the others, because QUIC streams are delivered independently (QUIC packets are carried on top of UDP).


IMAP is short for Internet Message Access Protocol, and it’s a popular alternative to POP3, another message retrieval protocol. IMAP differs from POP3 in that it keeps all your emails on the server until you decide to delete them, and it’s generally considered better for accessing your email online. Messages are only saved to your computer if you decide to download them. You can choose which protocol you prefer, and whether or not to download your emails, when you’re setting up your email client.

IMAP lets you get at your emails from anywhere, so long as you have an Internet connection. Your emails live on the mail server, so they are effectively backed up and safer than if you keep them yourself. Of course, the downside is that you do need a reliable connection, so if you live somewhere remote you might want to take that into account. IMAP can give simultaneous mailbox access to multiple users, and it comes with additional email management functions such as search, email state information, multiple mailboxes and shared folders.
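Python’s standard imaplib module speaks the protocol directly. The sketch below counts unread messages in the inbox; the mail server and credentials are hypothetical placeholders:

```python
# Sketch: counting unread messages over IMAP with the standard library.
# Server address and credentials are hypothetical placeholders.
import imaplib

def count_unread(host: str, user: str, password: str) -> int:
    """Log in over TLS and return the number of unseen messages in INBOX."""
    with imaplib.IMAP4_SSL(host) as imap:    # IMAP over TLS, port 993
        imap.login(user, password)
        imap.select("INBOX", readonly=True)  # don't change any message flags
        status, data = imap.search(None, "UNSEEN")
        return len(data[0].split()) if status == "OK" else 0

# Usage (against a real mail server):
#   print(count_unread("imap.example.com", "alice", "secret"))
```

Because the messages stay on the server, the same count is visible from every device that logs into the mailbox, which is exactly the IMAP behaviour described above.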


The Internet Protocol (IP) is a protocol for addressing and routing data packets so that they can move across networks and reach the proper destinations. IP information is attached to each data packet, and routers use it to forward the packet to the right place. In fact, every device and domain that connects to the Internet has an IP address assigned to it. Because data packets are directed to the IP address attached to them, the data goes where it is supposed to go. Upon arrival at the destination, packets are handled differently depending on which transport protocol is used with IP; the most common transport protocols are TCP and UDP.

On the public internet all IP addresses are managed and assigned by the IANA, the Internet Assigned Numbers Authority. The IANA delegates its responsibilities to five Regional Internet Registries (RIRs), which are globally coordinated and collectively responsible for managing IP addresses across the globe. The RIRs allocate IP addresses to ISPs and other entities located in their respective regions.

At present there are two versions of the IP protocol in use, IPv4 and IPv6; the protocol was initially introduced in 1983.


IPv4 is absolutely essential to the smooth functioning of the internet and is currently the most widely used version of the Internet Protocol (IP). IPv4 is responsible for making sure data packets can be transmitted across the internet, and for locating servers (hosts) on the internet. Though there are two editions of the protocol currently in use (IPv4 and IPv6), IPv4 remains the more common. Note that specific ranges in the IPv4 space are reserved by IANA, the authority that controls IP ranges, for purposes such as private networks and multicast.

How an IP address works

Internet protocol (IP) is what facilitates communications between different devices on a network. The IP protocol gives every device on a network a unique numeric identifier in the shape of an IP address, which defines where the device is on the internet. Servers on the internet will usually have a static, in other words permanent, IP address so that these devices can always be reached at the same address. PCs and mobile devices are usually assigned dynamic IP addresses by the DHCP servers on the network they are using; these dynamic addresses can change at any time.

Understanding IPv4 addresses

Each IPv4 address has 32 bits, which is 4 bytes, and is written using decimal numbers to make it easy to read: four 8-bit segments, each ranging from 0 to 255 and separated by periods. The total number of IPv4 addresses available is just over four billion (2^32).
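The dotted-decimal notation is just a readable rendering of a single 32-bit number, which Python’s ipaddress module makes easy to see:

```python
# Sketch: an IPv4 address is one 32-bit integer under the hood.
import ipaddress

addr = ipaddress.IPv4Address("192.168.1.10")
print(int(addr))                           # 3232235786, the raw 32-bit value
print(ipaddress.IPv4Address(3232235786))   # back to 192.168.1.10

# The whole address space: 2**32 values, "just over four billion".
print(2 ** 32)  # 4294967296
```

Each of the four decimal segments is simply one byte of that integer, which is why no segment can exceed 255.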

Currently most of these IPv4 addresses are assigned, which is why an extended address space, IPv6, has been developed. An IPv6 address is a 128-bit number rather than the 32-bit number that makes up an IPv4 address. As for IPv4, the last eight IPv4 address blocks were assigned in 2011, and some companies are already switching from IPv4 to IPv6.


The internet is running out of available IPv4 addresses. IPv4 was the original IP address range, designed when nobody thought the internet would be as big and as important as it turned out to be. IPv6 was designed to replace IPv4 because IPv4 is limited to 32-bit internet addresses. As a result, IPv4 can support only 2^32 addresses, which has turned out not to be enough.

In contrast, IPv6 supports far more public IP addresses: 2^128, in fact. It is unlikely that the internet will ever exhaust the IPv6 address space. But IPv6 provides more than just extra addresses; it also improves on the features of IPv4, with more efficient packet headers, stateless address autoconfiguration and built-in security in the shape of IPsec.

Unlike IPv4 addresses, IPv6 addresses are written in hexadecimal, as eight groups of four digits separated by colons. A fully written-out IPv6 address looks like this: 2001:0db8:86a3:0000:0000:8a2e:0380:7334.
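Fully written-out addresses are rarely shown in practice: leading zeros in each group can be dropped, and one run of all-zero groups can be collapsed to "::". Python’s ipaddress module applies these shorthand rules, using the example address from above:

```python
# Sketch: the standard shorthand rules for writing IPv6 addresses.
import ipaddress

full = "2001:0db8:86a3:0000:0000:8a2e:0380:7334"
addr = ipaddress.IPv6Address(full)

# Leading zeros dropped, the all-zero run collapsed to "::"
print(addr.compressed)  # 2001:db8:86a3::8a2e:380:7334

# And the address space really is 2**128 values
print(2 ** 128)
```

The compressed form is what you will usually see in configuration files and browser address bars.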

Keeping the internet up and running smoothly requires a gradual shift from IPv4 to IPv6, rather than a sudden move. The two protocols look very similar, but in reality IPv4 and IPv6 are two completely independent networks running in parallel; the only way traffic flows between the two is via tunnelling and special gateways.

Internet devices and services, including personal computers, servers and internet routers, must all be configured to work with IPv6, rather than just IPv4. In many cases devices can simply be updated with a firmware or software upgrade. Nonetheless, these upgrades are costly, particularly where large numbers of devices are involved, and sometimes updates simply won’t be possible. As a result, only a small proportion of the internet currently supports IPv6.


iSCSI is short for Internet Small Computer System Interface. It’s an Internet Protocol standard for connecting data storage devices via a network. It permits SCSI commands to be sent over IP networks across great distances to manage storage. iSCSI is significant because it makes data storage and transmission faster and more flexible. It’s a TCP/IP-based protocol that lets SCSI function on top of the TCP layer. Unlike plain IP, which makes no ordering guarantees, packet delivery with iSCSI always happens in a particular order, because it rides on TCP. It sends the same commands used by SCSI software, but sends them over the network, and this is equally true for local area network (LAN) and wide area network (WAN) applications. iSCSI links all kinds of storage devices across a network and lets you treat them as if they were local.

iSCSI Initiators

The iSCSI initiator works as an iSCSI client. On a PC it functions just like a SCSI bus adapter, but rather than linking physically with the SCSI devices, the iSCSI initiator transmits data over the network. Initiators can be of the software type, which uses code, usually a software driver, to emulate iSCSI, relying on the networking hardware that’s already in place to present SCSI devices to a PC via the iSCSI protocol. There are software iSCSI initiators for all the major operating systems (the iSCSI Windows Initiator, for instance) and they’re the most widely used way of setting up iSCSI. A hardware iSCSI initiator uses dedicated hardware, usually with integrated software. Hardware initiators aren’t as slow as software iSCSI and don’t carry as much risk of network interruptions, which is one reason why services using a hardware iSCSI initiator can see increased performance.

iSCSI Downsides

One of the downsides of iSCSI, particularly for resource-heavy applications, is the extra latency: wrapping SCSI commands in TCP/IP protocols slows things down a tad. It also makes it hard to guarantee quality of service and decent performance on mixed networks. For instance, if VoIP, software iSCSI, email and Excel spreadsheets all share the same connection without some form of QoS, the results may be disappointing. By contrast, a Fibre Channel SAN will likely carry only disk traffic on its network, so performance will be much better.