Deep Dive Into WordPress Toolkit 4.7 Release

WordPress Toolkit 4.7 is the third major WordPress Toolkit update in 2020. It’s also the first update developed and released by a team working completely remotely due to the current lockdown. We’re happy to announce that we were still able to deliver as planned. Read on to learn what was added in this release.

Update of Paid Plugins & Themes

Most WordPress agencies and web developers use paid plugins and themes in their projects, and the same goes for WordPress admins who are at least semi-serious about their site. The main problem with such plugins and themes is that they’re not hosted on wordpress.org, so WordPress Toolkit couldn’t detect their updates or install them. This led to a frustrating user experience: you could update some plugins and themes via WordPress Toolkit, but for the rest you had to go through WordPress itself. The Smart Updates feature also couldn’t update such plugins and themes, which limited its usefulness.

I’m not exaggerating when I say that this was the main known showstopper on the critical user path in WordPress Toolkit. This is why I’m very happy to announce that we have removed this showstopper in WordPress Toolkit 4.7. If you can see and install the plugin or theme update in WordPress itself, you can do the same in WordPress Toolkit now. Let’s take a closer look.

Here’s how these updates are displayed in WordPress itself:

WordPress Toolkit displays these updates in the same way it displays updates for plugins and themes from wordpress.org:

We can update everything that can be updated in WordPress itself in a way familiar to WordPress Toolkit users:

After the update is performed, you can see the version change confirming the update success, same as with free plugins and themes from wordpress.org:

Just to be sure, let’s check what WordPress itself says about the update:

All good! Smart Updates will also be able to handle these updates in the same way they handle updates of regular plugins and themes. Same goes for automatic updates, if they’re enabled for a particular site.

There’s one important caveat that needs to be mentioned, though. Certain paid plugins and themes require a license to be updated automatically:

Right now, WordPress Toolkit will try to update such plugins and themes anyway, and everything will look as if the update was successful. But when you check for updates again, you’ll see that nothing was actually updated. We’re planning to handle this in one of the next WordPress Toolkit releases. It’s not that big of a deal, given that WordPress itself can’t update such plugins and themes either, but it’s still something we’d like to iron out in the future.

We hope that the ability to update paid plugins and themes via WordPress Toolkit will make life easier for many WordPress pros. 

Ability to Disable wp-cron.php Execution

WordPress has its own task scheduler responsible for handling time-based jobs like checking for updates, publishing scheduled posts, and so on. The script behind it (wp-cron.php) is run every time a page on a WordPress site is accessed. This behavior might be fine for websites with low traffic, but when your site gets more popular, the strain caused by running this task too often can lead to reduced server performance. That’s why many WP pros recommend “disabling wp-cron” – a short way of saying “turn off the default way of executing wp-cron.php and instead run an external scheduled task on a specific predefined schedule”.

To facilitate this operation, we have added a one-click switch on each website’s card:

Turning the switch on will automatically create a scheduled task that runs wp-cron.php every 30 minutes. It will also disable the default wp-cron execution by adding a specific line to the wp-config.php file.
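
The switch is essentially automating two well-known manual steps. Here’s a rough sketch of what that looks like; the domain below is a placeholder, and the exact line and schedule WordPress Toolkit writes may differ slightly:

# 1. A line equivalent to this is added to wp-config.php to turn off
#    the default, page-triggered execution:
#      define('DISABLE_WP_CRON', true);
# 2. An external scheduled task (crontab syntax) requests wp-cron.php
#    every 30 minutes instead:
*/30 * * * * curl -s 'https://example.com/wp-cron.php?doing_wp_cron' >/dev/null 2>&1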

If a user has the permission to manage scheduled tasks, a ‘Setup’ link will appear in the interface:

Clicking the link takes you to editing the parameters of the external scheduled task in the native control panel interface. If you want to run the task on a different schedule, you can modify it on this screen, using familiar Plesk controls:

The task created by WordPress Toolkit is a standard scheduled task that can be accessed on the Scheduled Tasks screen in Plesk at any time:

This is done to ensure operational transparency and the ability to easily manage the parameters of the scheduled task. To ensure the robustness of this system, WordPress Toolkit regularly checks whether the scheduled task still exists. If a user accidentally deletes this task, WordPress Toolkit will recreate it shortly afterwards.

Disabling wp-cron is a well-known trick in the WordPress community, so some users might’ve already done it manually. If this is the case, WordPress Toolkit will detect the changes in the wp-config.php file, and the state of the corresponding switch will be adjusted automatically. However, we can’t reliably tell which scheduled task was manually created by users to handle the launching of wp-cron.php, so users might end up with two scheduled tasks running. The solution is simple: you can either leave both tasks running (which shouldn’t be a big deal in terms of performance), or remove your own old task and modify the task created by WordPress Toolkit to make it work the way you want.
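
If you’re not sure whether you already have your own task in place, listing the crontab of the site’s system user (the username below is hypothetical) will reveal any duplicates:

# Show scheduled tasks for the subscription's system user and filter for wp-cron:
crontab -l -u example_user | grep wp-cron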

Server administrators also have the ability to enable this option by default on all new WordPress installations. This is done by selecting the corresponding checkbox in the global WordPress Toolkit settings:

Toggling the feature off on a site will update the site’s wp-config.php file and remove the corresponding scheduled task, so if something starts working incorrectly, the changes are fully reversible. 

UX Improvements 

WordPress Toolkit 4.7 includes two more changes requested by users. Both are UX improvements that should make working with WordPress Toolkit more comfortable for all users.

Remember the site labels introduced in WordPress Toolkit 4.5? Now you can filter all your sites on the ‘Installations’ tab by these labels, making it easier to work with a large number of sites:

If you’re looking at your Updates screen and wondering what exactly is included in a particular plugin or theme update, you can now click on the ‘Changelog’ link to open the corresponding page:

Bugfixes & Other Improvements

We have synchronized the timeouts between WordPress Toolkit and our screenshot service to make sure fewer screenshots end up in limbo because of miscommunication between the two systems. We have also fixed several customer-reported bugs, including a serious one that prevented updates of remote WordPress sites connected via our plugin. Our next release will include more fixes of customer-reported bugs from our backlog, with some internal improvements to boot.

WordPress Toolkit for cPanel and Future Plans

Behind the scenes we are continuing to develop WordPress Toolkit for cPanel. So far, the team has finalized the development of cloning functionality, added the ability to remove WordPress websites, and introduced the Smart Updates feature. 

On the cPanel integration front we’re going to add the Data Copy feature, introduce additional security measures, start working on the product licensing, and work on a number of internal improvements. When it comes to new customer features, we already have a shortlist of candidates to choose from. And they all look very promising. We can’t wait to unveil new stuff for our customers.

Thank you for reading all the way to this point. We hope that your WordPress Toolkit experience will continue to improve. And we’re already hard at work to make it happen. Until next time!

Next Level Ops Podcast: Plesk’s Francisco Carvalho Gives the Scoop on the Partner Program

Next Level Ops Ep 2 Visual

Hello Pleskians! We’re back with another episode of the Official Plesk Podcast: Next Level Ops. This week, Superhost Joe sits down with Francisco Pereira Carvalho, the Head of Sales at Plesk. As the Pleskian Wizard for Partner Experience, Francisco gives us the details about Plesk’s Partner Program.

Success Comes from Knowing How Different Countries Do Business

Partners Program Visual

It’s the second episode of the Official Plesk Podcast: Next Level Ops. Joe and Francisco discuss essential details about the Partner Program, as well as the myriad factors to consider when working with different partners. “We have over 1,600 partners globally,” mentions Francisco. The essence of successful partnerships is to understand what’s important for partners in different countries and regions. “What we would like to do is to give people and entrepreneurs who would like to start a business an easy and intuitive tool to integrate into their workflow and systems,” says Francisco.

“What we would like to do is to give people and entrepreneurs who would like to start a business an easy and intuitive tool to integrate into their workflow and systems.”

Francisco

Key Takeaways

  • Who can apply? Businesses in need of more than 5 licenses can consider being part of the Plesk Partner Program. You can take advantage of high-level support and discounts.
  • Are there success stories? There are lots of partner success stories. Many partners started as solopreneurs and swiftly grew to become strategic partners. You can read our latest partner success story here.
  • What are the benefits? Partners benefit from an easy and intuitive tool to integrate into their workflow. Fast support is a given – and provided in 7 languages. Our team takes into account the needs of partners belonging to different cultures.

It’s time to hit the play button if you want to hear the rest. Join Joe and Francisco as they talk about what it takes to become a Plesk Partner, the benefits of the program, and how it can help you. You can also listen to the episode directly on Simplecast.

The Official Plesk Podcast: Next Level Ops Featuring

Joe Casabona

Joe is a college-accredited course developer. He is the founder of Creator Courses.

Francisco Pereira Carvalho

Francisco is the Head of Sales at Plesk.

Remember to update your daily podcast playlist with Next Level Ops. And stay on the lookout for our next episode.

How Beebyte Became a Leading Swedish WordPress Hosting Provider

There are two options when choosing a hosting service provider: pay a huge amount of money to a very large provider and sacrifice control and service, or work with a company like Beebyte, a smaller provider, and get high performance, flexibility, and great customer service – all at a highly competitive rate.

Beebyte has been a Plesk partner for a year and a half now. It offers a unique and custom-built control panel for virtual servers, VPS, and shared hosting. Recently, we’ve decided to celebrate Beebyte’s journey to success by talking about the business and its partnership with Plesk.

Discover Plesk Partner Program

The Start of Beebyte and its WordPress Solution

Beebyte Visual

Like many Plesk partners, Beebyte has a small but dedicated team. However, it started with just two people – Niclas Alvebratt and Simon Ekstrand – running everything, including 24/7 support. Having used Plesk on other projects, the founders have always been Plesk-friendly. Knowing there was a good market for it in Sweden, where the business is based, they decided to set up a small hosting business under the name Beebyte.

Thanks to their previous experience and network, Beebyte got off to a flying start. Knowing that Plesk helps entrepreneurs get started with a small business, setting up the business with Plesk was a no-brainer.

After some time and gaining a certain volume of customers, Beebyte grew to a four-person team and started using Plesk’s WordPress Toolkit. Both Plesk and Beebyte attended WordCamp Nijmegen in 2019. Earlier, Beebyte had attended WordCamp Stockholm and WordCamp Norrköping and gained a good reputation within the Swedish WordPress community.

Beebyte now offers shared WordPress hosting as one of its main, high-performance solutions. With its setup, PHP code is processed faster and visitors get a better experience thanks to faster pages. Beebyte’s web hosting includes smart features for WordPress management such as staging and copying sites and installing and updating free SSL certificates. You can also secure installations with Beebyte’s WordPress security tools and manage updates directly from the web host’s control panel (based on Plesk). Additionally, Beebyte delivers its in-house developed monitoring engine and great customer service within a single pane of administration for both end users and resellers.

How does Beebyte use Plesk?

Beebyte Dashboard

Beebyte has over a decade of experience on both Windows and Linux platforms. It offers high-availability, 100% SSD-based servers and shared hosting, with all services delivered from its state-of-the-art, environmentally friendly data centers in Sweden, as well as senior consultancy services at very competitive rates.

One of the ways Beebyte uses Plesk is to help reduce the number of customer support tickets. Since adopting Plesk, tickets have decreased drastically, because a ticket based on the platform can be solved super fast. On the technical side, Beebyte has a secure, solid platform – but non-tech-savvy customers may struggle. With Plesk, however, the company can resolve everything quite fast.

It also uses Plesk with an automated reseller program, where its customers can resell the full service catalog. Customers can also add their own branding and handle payments in a very flexible way. You can find more info on the combined Beebyte and Plesk offering here.

What has Beebyte achieved since partnering with Plesk?

Today, Beebyte is at the top when it comes to WordPress and e-commerce hosting in Sweden – regardless of whether it’s shared hosting, VPS, or multi-server load-balanced solutions. Thanks to its success, Beebyte can focus on delivering new features to all its users, such as the Iris monitoring tool that combines with Plesk and is neatly integrated into its user portal.

Beebyte is also committed to running a sustainable business that is part of a greater whole. For example, every month it gives employees one day off to engage in social work in the local area, such as helping the homeless or night patrolling – driven by the belief that a healthy conscience goes hand in hand with a profitable business.

In line with this approach, the founders also support the Free Software Foundation and the Tor network to help communities in countries with authoritarian regimes with high censorship and privacy concerns.

Become a Plesk Partner

Cross-Origin Resource Sharing In Simple Words

CORS - Cross-Origin Resource Sharing

It might be a new concept to you, but CORS, or Cross-Origin Resource Sharing, is the mechanism that allows a restricted resource on a web page to be requested from a website that sits on a different domain.

While websites can freely embed (request) assets such as CSS, images, videos, and scripts from a different origin (cross-origin), there are restrictions on other requests. For example, cross-domain Ajax requests are generally blocked because browsers apply a security policy that limits scripts to same-origin requests. Cross-origin requests are barred by default.

CORS is a way for servers and browsers to determine whether accessing resources cross-origin is safe. Because apps can determine whether cross-origin access is safe, developers have more freedom to implement website functionality compared to being restricted to same-origin requests. At the same time, CORS aims to be more secure than simply allowing cross-origin requests without any limits or restrictions.

The Fetch Living Standard, specified by WHATWG, contains the CORS specification and explains how a CORS implementation should work. CORS was also included in an earlier Recommendation from the W3C.

Understanding the functionality of CORS

In essence, by describing a new type of HTTP header, CORS enables a browser to request a URL that is remote – assuming it has permission to do so. Validation and authorization can be performed server-side to some degree, but usually the browser will take the role of supporting CORS headers and adhering to any specified restrictions.

For Ajax and HTTP request methods that can modify data (usually HTTP methods other than GET, or POST with certain MIME types), the CORS specification asks the browser to apply a “preflight”. This means that the browser first asks for the supported methods from the server using the HTTP OPTIONS method, and only once the answer arrives does it send the actual request using the intended HTTP method. A server can also advise the client whether credentials such as HTTP authentication information or cookies should be sent along to verify a request.

An example of CORS in action

Let’s say someone visits the website http://www.supersite.com, and the web page then tries to make a cross-origin HTTP request to fetch data from http://help.supersite.com. If the user’s browser supports CORS, it will try to make a valid cross-origin HTTP request to help.supersite.com by doing the following (the exchange can be reproduced with curl, as sketched after the list):

  1. The browser sends a GET request with an extra “Origin” HTTP header to help.supersite.com. This header contains the domain of the parent page that made the original request:
     Origin: http://www.supersite.com
  2. The server at help.supersite.com can respond with one of the following:
    1. The requested data together with an ACAO header (that’s an Access-Control-Allow-Origin header) stating that it accepts requests from that origin, for example:
       Access-Control-Allow-Origin: http://www.supersite.com
    2. The data together with an ACAO header stating that it accepts requests from any domain:
       Access-Control-Allow-Origin: *
    3. An error page stating that it does not accept cross-origin requests
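
You can reproduce this exchange from the command line. The domains are the illustrative ones used above; curl simply plays the role of the browser so you can inspect the response headers directly:

# Send a cross-origin GET with an Origin header and show the response headers:
curl -s -i -H 'Origin: http://www.supersite.com' 'http://help.supersite.com/' | head -n 20
# Look for Access-Control-Allow-Origin in the output: either the exact origin,
# a wildcard (*), or no header at all if cross-origin requests are not allowed.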

Sometimes an API response or a page is considered to be public content that can be accessed by anyone, and that’s where the wildcard response is used so that code on any site can access it. Google Fonts, for example, hosts freely available web fonts for anyone to use.

The object-capability model is another place where wildcard origin policies are used, because the pages have URLs that simply cannot be guessed – the idea is that anyone who knows the unique URL should be able to enjoy access.

The wildcard value “*” is special in that requests using it cannot supply credentials – this prohibits the use of client-side SSL certificates, HTTP authentication, and even cookies; none of these are allowed in the request that’s sent cross-domain.

CORS architecture states that the ACAO header must be set by the external service – it’s not set by the application server that sends the original request. In our example, help.supersite.com uses CORS to permit the browser to send it requests from www.supersite.com.

An example of CORS preflight

Modern CORS-supporting browsers require an additional “preflight” request when certain cross-domain Ajax resources are requested. This is done to determine whether the browser has permission to take the action it wants to take; CORS preflights exist because of the implications such requests can have for user data. For example, the browser might first send:

OPTIONS /

Host: help.supersite.com

Origin: http://www.supersite.com

Let’s say help.supersite.com is happy to take the requested action. It could then send back these HTTP headers:

Access-Control-Allow-Origin: http://www.supersite.com

Access-Control-Allow-Methods: PUT, DELETE

The next step is for the browser to make the request it originally wanted to make. On the flip side, if help.supersite.com does not allow cross-site HTTP requests, it will simply send back an error when the OPTIONS request arrives, and as a consequence the browser won’t make the request it originally intended to make.
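
You can also trigger a preflight by hand to see what a server answers. Again, the domains are the illustrative ones from above:

# Simulate the browser's preflight for a cross-origin PUT:
curl -s -i -X OPTIONS \
  -H 'Origin: http://www.supersite.com' \
  -H 'Access-Control-Request-Method: PUT' \
  'http://help.supersite.com/' | head -n 20
# A permissive server echoes back Access-Control-Allow-Origin and
# Access-Control-Allow-Methods; missing headers or an error status means
# the browser would refuse to send the real request.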

Headers specific to CORS

The HTTP headers used by CORS include:

Request Headers

  • Origin
  • Access-Control-Request-Method
  • Access-Control-Request-Headers

Response headers

  • Access-Control-Allow-Origin
  • Access-Control-Allow-Credentials
  • Access-Control-Expose-Headers
  • Access-Control-Max-Age
  • Access-Control-Allow-Methods
  • Access-Control-Allow-Headers

Which browsers support CORS?

Given that CORS is a relatively recent technology that depends on browser support, it’s worth noting that you need an up-to-date browser to use a website that relies on CORS. For example, Chrome 28+ will work, and so will Opera 15+ and Android 4.4+. Some Presto-based browsers – that’s Opera – will work, while Gecko supports CORS from 1.9.1 onwards. Every version of Microsoft Edge also supports CORS.

The history of CORS

Three staff members at Tellme Networks, Michael Bodell, Matt Oshry, and Brad Porter, originally started working on CORS in March 2004, intending it to be included in VoiceXML version 2.1. The idea was to make sure that VoiceXML browsers could safely make cross-origin data requests.

However, the mechanism they developed clearly had wider applications beyond VoiceXML, and that’s why CORS was subsequently added to an implementation NOTE. The W3C’s web apps working group, alongside major browser developers, started formalising the specification into a working draft, which was eventually put on track to become a formal W3C Recommendation.

The first W3C working draft was submitted in May 2006, and nearly three years later the draft was given its name: CORS. It was only at the beginning of 2014 that CORS was formally accepted as a W3C Recommendation.

Differences between JSONP and CORS

It’s worth noting that CORS can be seen as a better, more contemporary alternative to the JSONP pattern. CORS offers a number of advantages, including the fact that it supports a range of HTTP methods rather than being restricted to GET, like JSONP.

Furthermore, XMLHttpRequest can be used in the context of CORS, which means CORS can handle errors better than JSONP. It’s also worth noting that JSONP is susceptible to cross-site scripting (XSS) problems if the external website is compromised, whereas CORS lets a site parse responses manually, which improves overall security.

However, JSONP does have the advantage of being supported by legacy browsers that lack CORS support. This is of little consequence today, though, because CORS is now widely supported by everyday browsers.

Understanding Accelerated Mobile Pages (AMP)

AMP - Accelerated Mobile Pages

AMP, or Accelerated Mobile Pages, is an open-source framework commonly used on mobile devices – you’ve probably used it while searching for content without even knowing it. AMP was developed by Google, with Twitter also involved. The aim of AMP is to give mobile users better and faster experiences. It does this by letting developers simplify both CSS and HTML so that mobile users get a more “lightweight” experience.

Facebook started the trend with its Instant Articles and Google responded with AMP. However, AMP has gained a bigger foothold and today it is a commonly used way to deliver mobile content from search results at a much faster speed – compared to serving a standard page. In fact, AMP has become so prominent and so popular that Google has been pushing to get AMP included in the web standards framework.

How AMP works its magic

There are basically three main components to AMP. Two are code-related: AMP’s version of HTML, and AMP JS. There is also a content delivery network (CDN) working behind the scenes. Let’s take a look at the three components:

  • AMP HTML. Consider it a slimmed-down version of standard hypertext markup language (HTML). In essence, AMP HTML restricts developers in terms of which HTML tags they can use.
    Some HTML tags are restricted when using AMP, and the goal of these restrictions is to improve page load speeds. The same goes for CSS: AMP also limits the CSS you can use. There is a full list of AMP HTML tags available, which most seasoned developers will recognize straight away.
  • AMP JS. Developers cannot use their own JavaScript when coding AMP pages – again, like the HTML restrictions, the goal is to cut out code that can make a page load slowly. Instead, developers can use AMP scripts, which are optimized so that pages load quickly. So even though you cannot use arbitrary JavaScript with AMP, you can rely on AMP’s library of components for animations, dynamic content, and layout changes. You’re even covered for data compliance.
  • Content delivery. AMP has a content delivery network (CDN) that speeds up the delivery of web content – it’s called AMP Cache. It’s proxy-based and acts as a cache, storing all valid AMP content – you cannot, by default, opt out of using AMP Cache. But don’t worry: if you have your own CDN, you can still use it by layering it on top of AMP Cache.

Is AMP a good idea for your site?

It depends on what your website is geared to do. If you serve news and other mostly static stories, AMP can work really well: AMP is known to generate more organic search referrals, particularly for media sites. Furthermore, media sites can make their content stand out using Google’s Rich Cards.

E-commerce operators might want to think twice as there’s currently no settled opinion on the value of AMP for retailers. At issue is the dynamic nature of these pages – e-commerce sites involve lots of user interaction such as sorting, filtering and adding goods to a cart.

Nonetheless, there is a general agreement that AMP, used correctly, can achieve the following for website owners:

  • Deliver a big boost for organic search, with much more traffic sent from Google
  • Improve conversion thanks to a much-improved mobile experience; AMP can also boost engagement with mobile users
  • Reduce server load, because content is served from the AMP Cache CDN
  • Boost your prominence in mobile search results, as your site will feature in the AMP carousel

Things to watch out for with AMP

Perhaps the biggest issue with AMP is that it is quite an involved process – setting your website up to serve AMP pages requires a lot of work. While serving better mobile pages should always be a priority, you should weigh the benefits against the potential costs of implementing AMP. In fact, putting AMP in place might mean that you run your site “in parallel”, with one set of assets for normal content and one set of assets for AMP.

AMP also makes it harder to measure website traffic. Due to the AMP CDN, you won’t be able to rely on counting server requests to measure traffic. Instead, you will need to find other methods of tracking users to get a real view of CTRs; it will be a bit trickier to measure engagement on the AMP version of your site.

Another point to keep in mind is the user experience. Because AMP is stripped-down HTML you will not be able to deliver some content types – think about images that rotate, or a map that can be navigated. UX-heavy parts of your site will need to be re-built for AMP, so you’re running two sites in essence.

Finally, site owners should also beware of the fact that the nature of AMP means that users are more likely to head back to search results after they view your page, rather than engage further with your site. This has a negative impact on engagement and conversion.

Using the opportunities AMP provides

AMP has its drawbacks but many site owners will benefit from looking into the mobile traffic benefits that AMP can bring. You can start off by building an AMP version of your site so that you can feature more prominently in mobile web searches. Next, consider developing ways to easily route mobile users who land on an AMP page straight to your mobile app. Do this and you’ll mitigate the loss of engagement while also getting access to improved analytics. It’s also worth trying to get a full view of how your user journeys from AMP content to your app or to your website. Do AMP users convert? It’s worth trying to find this out as you experiment with an AMP version of your site.

REST – All You Have To Know About Representational State Transfer

REST

REST, short for Representational State Transfer, is a style of software architecture designed to ensure interoperability between different computer systems on the Internet. REST works by placing strict constraints on the development of web services, and services that conform to the REST architecture can more easily communicate with one another.

RESTful services can request and edit textual representations of web resources via a predefined set of uniform, stateless operations. REST is just one type of architecture – SOAP web services are another example – and each architecture has its own specific rules and operations.

The roots of REST

The original definition of a web resource was basically a document or file that could be accessed using its URL. However, as the web became more complex, the term “web resource” was expanded. Today, a web resource is essentially anything that can be found, named, reached, or manipulated on the web.

When a RESTful web service requests a resource via its URI, it gets a response that contains a payload formatted in one of a number of set formats, ranging from HTML and XML through to JSON. A response to a RESTful request can also state that something about the resource has changed, and it can deliver hypertext links to resources that are related to the requested resource – or indeed to entire collections. When using HTTP (which is usually the protocol of choice), the developer has access to a range of operations including GET, POST, HEAD, and even DELETE and TRACE.
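
To make this concrete, here’s what a few such requests against a hypothetical JSON API might look like; the endpoint and payload are purely illustrative:

# Retrieve a representation of a single resource:
curl -s -H 'Accept: application/json' https://api.example.com/articles/42
# Create a new resource in a collection:
curl -s -X POST -H 'Content-Type: application/json' \
  -d '{"title": "Hello REST"}' https://api.example.com/articles
# Delete a resource:
curl -s -X DELETE https://api.example.com/articles/42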

REST developers make use of standard operations alongside a stateless protocol. The resulting RESTful system is more reliable, performs faster, and gives developers the option to reuse REST components. These components can be updated or managed on the go without affecting the entire underlying system – you can change RESTful system components while the system is running.

Roy Fielding developed REST in 2000 as part of his doctorate. His dissertation referred to REST principles in the context of the HTTP object model, which dates from 1994. These principles were used when developing HTTP 1.1 and the standards around the URI – the uniform resource identifier.

Why the name REST?

As you can imagine, REST as a term was chosen on purpose: it was supposed to make developers reflect on how a properly designed web application should behave. Such an app acts as part of a network of web resources (also called a virtual “state machine”). Within this network, the user steps through the app by picking resource identifiers (a specific page or URL) and then applying an HTTP operation such as POST; these operations are also called the “applicable” state transitions. The representation of the resource is then sent to the end user – this is the next application “state”.

The 2000 PhD dissertation, and what Fielding said

Roy Fielding, a student at the University of California, Irvine, wrote a PhD dissertation in 2000 called “Architectural Styles and the Design of Network-based Software Architectures”. While his dissertation was published in 2000, he had in fact been working on REST for a number of years, arguably in parallel with HTTP 1.1, which was developed between 1996 and 1999 and was of course based on the HTTP 1.0 design of 1996.

In 2009, looking back at how REST was developed, Fielding said that he was involved in the HTTP standardization process and was often asked to defend many of the choices made when designing how the Web works. He outlined how challenging that was, given that anyone could submit a suggestion on how the Web should work, and given that the Web was becoming absolutely central to an industry growing at breakneck pace.

Fielding says that at the time he had over five hundred developers commenting on the development of the Web, many of them highly experienced engineers. It was part of Fielding’s job to explain the Web’s concepts from the ground up, and he says that the process of doing this helped him settle on some important principles. It also helped outline the constraints and properties that are now part of the REST architecture.

What are the key architectural principles of the REST architecture?

We explained earlier how REST works in large part due to the constraints it places on web architecture. These constraints affect web architecture in the following ways:

  • By placing requirements on how quickly component interactions are performed – a key determinant in the way users perceive network performance, and a key factor in overall network efficiency.
  • By enforcing scalability – in other words, so that large numbers of components are supported, alongside interactions between these components.
  • Demanding a simple, uniform interface
  • Ensuring that components can be easily modified when user requirements change, even while the app is running
  • Making sure that the communication between service agents and components are totally visible
  • Ensuring that components are portable as the code behind components can be moved with data intact
  • Finally, REST demands that the entire system is resilient, no single connector or component failure should result in the entire system collapsing

It’s worth noting that Fielding had some specific comments about the scalability aspect of the REST architecture. He referred to a unique characteristic of REST: the separation of concerns, ensuring that components serve distinct purposes. This makes it easier to implement components, makes connectors simpler, and makes it easier to tune applications for improved performance; overall, server components become more scalable. Because system constraints are layered under REST, any intermediary such as a firewall or a gateway can be added to the application without changing the way components interface with one another. These intermediaries can therefore easily assist in translating communications or improving performance – Fielding pointed to large-scale, shared caches, for example.

Fielding said that, overall, REST makes intermediate processing easier. It does this by making sure that the messages sent under REST are self-descriptive. REST also ensures that interactions are stateless between requests, and Fielding pointed to the fact that REST imposes standard methods and standard media types when exchanging information. REST responses are also cache-friendly.

The architectural constraints used by REST

Looking beyond architectural principles, it’s important to understand the constraints that define which systems qualify as RESTful. There are six, and they are designed to limit the way servers respond to and process client requests. By imposing these limits, the system gains positive characteristics: it performs faster, is simpler, and is more scalable and portable. At the same time, a RESTful system is easier to modify and more visible. Overall, a RESTful system is also more reliable – thanks to the formal REST constraints:

Architecture based on a client-server approach

We touched on this point before: the separation of concerns. For example, with this approach the user interface tasks (or “concerns”) will be kept separate from the data storage tasks (“concerns”). In doing so the user interface is more portable and can be carried across to a different platform.

At the same time, the system is also more scalable because the server components are simplified. In the context of the web, it’s worth noting that this separation means distinct web components can grow and evolve on their own. It also supports operation at Internet scale, a requirement when multiple organizational domains are involved.

REST is stateless

In an important constraint, no client context may be stored on the server between requests. Each request must contain all the information needed to answer it, and the client itself holds the session state. However, the session state can be transferred to, say, a database so that the state persists for a period of time – this can enable authentication, for example.

The client begins sending requests when it is ready to transition to a new state; while one or more requests are outstanding, the client is considered to be in transition. When a client wants to start another state transition, it can make use of the links contained in the representation of the application state.
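
As a small sketch of what statelessness means in practice, every request carries its own credentials and context instead of relying on a server-side session (the token and endpoint below are made up):

# Each call is self-contained: authentication and paging context travel with the request.
curl -s -H 'Authorization: Bearer <token>' \
  -H 'Accept: application/json' \
  'https://api.example.com/orders?page=2'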

Content can be cached

When looking at RESTful architecture, both clients and intermediaries are able to cache the responses to requests. However, every response must classify itself as either cacheable or non-cacheable – this is to make sure that stale data, or data that is not appropriate for a cache, is not served from one. Thanks to caching there are fewer interactions between client and server, which means that an app performs better and scales more easily.

Layered architecture

Clients usually cannot tell whether an established connection goes through an intermediary or directly to the server. For example, where load balancers or proxies act as intermediaries, these won’t cause any issues with communications; nor, under a layered architecture, will using a proxy or load balancer require editing code on either the client or the server.

Servers that act as intermediaries can help to scale apps by working as a shared cache, or as a load balancer. Security is another benefit of the layered approach as a security layer can be added on top of a web app – and in doing so the security logic is running separately from the application logic. Developers can enforce security policies using this method. Also note that the layered approach means that a server can send a call to several servers in order to deliver a response to a client.

Code-on-demand

Finally, servers can hand code execution responsibilities to clients by sending code to them – for example, a server can send Java applets or JavaScript to a client so that the code is executed on the client side.

Uniform interfaces

Perhaps one of the most fundamental aspects of RESTful systems is the uniform interface, a way to decouple and simplify application architecture so that each component can evolve on its own. There are four further constraints involved in uniform interfaces:

  • Requests must contain resource identifiers. As an example, a URI can be contained in a request to identify the resource. Note that, conceptually, resources are distinct from the representations that are returned to a client: no matter how the server stores its data, the data sent in answer to a request could be anything from XML to HTML or even JSON.
  • Resources can be manipulated through representations. A client that holds a representation of a specific resource – possibly including metadata – has sufficient information to change or delete the resource.
  • Messages are self-descriptive. Every message sent under a RESTful architecture contains sufficient information for the recipient to process it – as an example, a message can use a media type to specify which parser should be used.
  • HATEOAS. Short for “hypermedia as the engine of application state”, this principle works just like a human starting from the home page of a website: the client should be able to use links provided by the server to discover on its own which actions and resources are available to it next (see the sketch after this list).
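
Here’s a rough sketch of what HATEOAS looks like on the wire; the resource, fields, and links are invented for illustration:

# Fetch a resource; a HATEOAS-style response embeds the links the client can follow next:
curl -s -H 'Accept: application/json' https://api.example.com/orders/42
# Example response body (illustrative):
# { "id": 42, "status": "processing",
#   "links": [ { "rel": "self",   "href": "/orders/42" },
#              { "rel": "cancel", "href": "/orders/42/cancel" } ] }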

A few other points to note about REST

SOAP has specific standards that define SOAP web services, but in the case of RESTful applications there is no officially issued standards document. In a way you can see REST as a style of web architecture rather than a specification or a protocol.

So, REST is not a standard, but implementations that would objectively qualify as “RESTful” are very dependent on standards – indeed, they are standardised to a degree, using XML, HTTP, and JSON. Note that developers sometimes describe the APIs they create as complying with REST architecture even if these APIs skirt around some of the constraints imposed by RESTful architecture.

How to manually remove website malware

Remove website malware

We all face daily cybersecurity challenges. No matter how hard you try, you’ll never reduce the chances of being hacked to zero. But server security solutions are here to help prevent and detect unauthorized access. Do you need help learning how to remove website malware?

There are convenient, automated ways to manage these threats, like ImunifyAV, one of our most appreciated extensions for this purpose.

Alternatively, let us help you get one step ahead of the hackers with our guide to manually removing website malware.

File with malware

Main malware strains

Hackers can get into your systems in various ways. One popular way is via injection attacks, where an attacker inserts a malicious file, in-memory cache entry, or database entry into a system component.

Code injection

  • Attackers can insert code into existing PHP or Perl programs to create backdoors or automated uploaders.
  • They can modify the contents of the .htaccess file to redirect visitors to other sites for the purpose of phishing or SEO hijacking (a quick way to flag suspicious .htaccess files follows this list).
  • They can alter JavaScript (.js) and HTML files to insert unwanted advertising scripts or content (so-called malvertising).
  • They can modify Exif information (metadata embedded in image files, e.g. JPG) to carry malicious payloads to other parts of the file system or to other sites.
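
Modified .htaccess files in particular are easy to flag once you know to look for them. Here’s a rough filter; note that WordPress’s own .htaccess legitimately contains rewrite rules, so review matches for redirects to unfamiliar domains rather than treating every hit as malware:

# List .htaccess files that contain rewrite or redirect directives:
find . -name '.htaccess' -exec grep -lE 'RewriteRule|Redirect' {} \;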

Hackers will often take full advantage of their position, and plant malicious code in multiple places.

Cache injection

A cache is a small, high-performance store of memory. If you don’t secure the server that maintains the caches, then memory can be overwritten in situ. If the affected portion of memory is a cached version of a web page, then a hacker can inject code or malicious content without changing website functionality.

Hacker scripts

Hacker scripts can take many forms and serve many purposes: backdoors, uploaders, spam mailers, and phishing pages. Hackers can also create doorways – site entry points designed to manipulate search engine indexes – or defacement scripts that exist simply to cause damage or prop up the attacker’s ego.

Replacing system components

Every hacker wants root access to your server, because it lets them replace any web server component with their own malicious version. Attackers can then control entire sites and add to or modify their behavior as needed. They can also remotely control such a component to issue redirects or serve new portions of malicious code. If an attacker hides this component carefully, it’s difficult to detect because the website appears to be working normally.

How to manually remove malware and repair your website

Manually removing malware

Now let’s assume you’re scanning your site with your favorite cybersecurity software, like Imunify360 or ImunifyAV. Use the following manual inspection techniques to make sure it’s doing a good job and start to manually remove malware.

IMPORTANT: Before continuing, ensure you have a full and working backup of your entire system.

File scanning

Traditionally, Linux-type systems have limited facilities for detailed file scanning and inspection, so let’s use what we have, in the form of find and grep. First, search the file system for all files modified within the past 7 days whose file name extension begins with .ph (to cover .php and .phtml):

find . -name '*.ph*' -mtime -7

However, what if a hacker thought of this first and reset the file modification dates? In that case, check whether the file attributes (the inode change time) have changed instead – this is much harder to forge. Here’s how to do that for .phtml and .php files:

find . -name '*.ph*' -ctime -7

We can narrow down the period we’re looking at by using the -newermt option of find, e.g. to look for files changed between the 25th and 30th of January 2019:

find . -name '*.ph*' -newermt 2019-01-25 ! -newermt 2019-01-30 -ls

Now we can introduce the grep command, which can recursively scan for and report patterns in files, e.g. to look for a portion of a URL in any file in the current directory or any directory below it:

grep -ril 'example.com/google-analytics/jquery-1.6.5.min.js' *
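
Another pattern worth grepping for is obfuscated PHP, since backdoors often hide their payload behind eval() and base64_decode(). Treat matches as leads to inspect, not as automatic proof of infection:

# Recursively list PHP files containing common obfuscation primitives:
grep -rilE 'eval\(base64_decode|gzinflate\(base64_decode|str_rot13\(' --include='*.ph*' .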

Permissions checks

If you suspect a breach in your web server or file system, check file permissions – in particular, look for files with the setuid or setgid bits set, which run with elevated privileges and are a favorite hiding place for backdoors. You can find them with the following command:

sudo find / -perm -4000 -o -perm -2000

Check for active processes

If a file system scan shows nothing unusual, take a look at what’s running on the system. The following command finds the web server (httpd) processes and lists the PHP scripts under vhosts that they currently have open:

lsof +r 1 -p `ps axww | grep httpd | grep -v grep | awk '{ if(!str) { str=$1 } else { str=str","}} END{print str}'` | grep vhosts | grep php

Analyzing malicious code: what to look for

You now know some of the basic techniques to search for files and file content. To go deeper when you manually remove site malware, you need to know what to look for. Here’s a helpful checklist.

Check rarely visited directories

System administrators rarely look in directories like upload, cache, tmp, backup, log, and images, making them ideal locations for hackers to hide malicious files.

Note: On PHP-based CMSes such as Joomla, check directories for .php files in the wrong places. If you’re on a WordPress site, check wp-content/uploads as well as the backup and theme cache directories.

Here’s an example of a command that checks for PHP files in an images folder:

find ./images -name '*.ph*'

Treat any similar files in such places suspiciously.

Files with strange names

Even though file names come in a wide variety, certain names should raise a red flag. Here are some examples:

  • php (no extension)
  • fyi.php
  • n2fd2.php

Note any unusual patterns or combinations of letters, symbols, and numbers in file names. Examples of file names that are inherently unreadable:

  • srrfwz.php
  • ath.php
  • kirill.php
  • b374k.php.php (double extension)
  • tryag.php

Hackers also exploit the habit of some programs of appending numbers to copies of existing files. So look out for files like:

  • index9.php
  • wp3-login.php

Look for unusual file name extensions

You don’t normally associate certain file name extensions with CMSes like WordPress. So if you see any of these, take note:

  • .py (Python code extension)
  • .rb (Ruby code extension)
  • .pl (Perl code extension)
  • .cgi (CGI code extension)
  • .so (Shared object extension)
  • .c (C source code extension)

Moreover, you also wouldn’t expect to find files with extensions like .phtml or .php3. If you discover any of the above on a PHP-based CMS website, then you should inspect it closely.

Look for non-standard attributes and creation dates on files

Another sign of suspicious files involves the file owner attribute, so you need to watch out for the following:

For example, suppose the .php files you upload via FTP or SFTP are transferred with the owner attribute set to myuser, but in the same directory you see files owned by www-data. Those files were created by the web server process itself rather than by you, and deserve a closer look.

You must also check script creation dates. If the date is earlier than website creation, then you need to be suspicious.

Look for large numbers of files

Directories containing hundreds or thousands of files are good places for a hacker to hide malicious scripts and payloads. Such large numbers of files can indicate a doorway, or some other form of blackhat SEO.

You can detect such directories with the find command. We recommend you start in a specific directory to limit your search and avoid overloading the system. The following example finds the top 25 directories with the largest number of files:

find ./ -xdev -type d -print0 | while IFS= read -d '' dir; do echo "$(find "$dir" -maxdepth 1 -print0 | grep -zc .) $dir"; done | sort -rn | head -25

(You can read more about file (inode) searching at StackExchange.)

Checking your server logs

Check server logs

You can also check any system through an inspection of the server log files. Here you can learn many things. For example:

  • You can tell how spam email was sent (when and where it was sent from, based on the access_log file and which script invoked the mail command).
  • You can check FTP logging. Logs such as xferlog tell you what was uploaded or changed, and who did it.
  • With the correct configuration of your mail and PHP servers, you can discover the location of any mail-sending PHP scripts.
  • You can check whether your CMS has additional logs to help you track down the source of an attack. This might help you determine whether an attack was external or came in via a CMS plugin.

Both access_log and error_log files are good sources of information. If you know which scripts are the attack vectors, you may be able to find the source IP address or the HTTP user agent value. You may also be able to see whether a POST request was made at the time of the attack.

Checking the integrity of files

You can deal with attacks more easily if you have adequate preparations in place, like recording the state of your files while they’re in pristine condition. You can then compare them to the same files after an attack. There are various ways to do this:

Use source code control systems such as git, SVN or CVS. In the case of git, you can simply utilize these commands:

git status 

git diff

Using source code control ensures you have a backup copy of server files. You can restore these easily in the event of a cyber attack.

There are also file-integrity monitoring tools that can alert you when anything on the file system changes.

In some cases, version control isn’t possible – for example, when using shared hosting. One workaround is to use CMS extensions or plugins to monitor file changes. Some CMSes even have their own built-in file integrity checks.

You can also keep track of what files you have at any one time by cataloging all the files on the system:

ls -lahR > original_file.txt

You can compare this file later with a fresher copy using comparison tools like WinDiff, AraxisMerge Tool, BeyondCompare, the Linux diff command, or even compare snapshots online. This lets you see what files have been added or removed.
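
For example, on Linux you can take a fresh snapshot and diff it against the original listing (the file names follow the example above):

# Take a new snapshot and compare it with the original one:
ls -lahR > current_file.txt
diff original_file.txt current_file.txt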

This whole process certainly looks pretty complex. You can always choose to fully automate it by using ImunifyAV for this purpose.

Comfortable Alternative to a Day’s Work – ImunifyAV

ImunifyAV

For added confidence, it’s good to know how to manually check your system for problems. And it’s a good way to learn some system administration techniques, like how to manually remove malware. Having a comprehensive server security solution such as ImunifyAV, a free antivirus and anti-malware scanner, is the first step towards a safe and secure website. You can easily upgrade to ImunifyAV+ and get a built-in, one-click, fully automated cleanup feature.

Next Level Ops Podcast: Plesk’s Lukas Hertig Goes Down Memory Lane with Web Hosting

Hello Pleskians! Over the last couple of months we have been busy in the studio preparing the Official Plesk Podcast: Next Level Ops. This week, our Superhost Joe Casabona sits down with Lukas Hertig, a fellow Pleskian of the Highest Order, to discuss 20 years of Plesk and the changing web hosting landscape.

Web Hosting Was Another World Back Then

Retro Computer

It’s the very first episode of the Official Plesk Podcast: Next Level Ops. Joe and Lukas go down memory lane through 20 years of web hosting. The podcast kicks off as Lukas introduces himself and his relationship with Plesk – it’s been a long one, 15 years. Joe takes a deep dive right into history: what did web hosting look like 20 years ago? The two chuckle. Websites were hard to set up and looked ugly. Do Dreamweaver, FrontPage and Geocities ring a bell? “It was like the wild, wild west of websites and hosting,” says Lukas. Joe solemnly agrees.

“It was like the wild, wild west of websites and hosting.”

Lukas

Key Takeaways

  • Web hosting was much more complicated in 2000 – there was a lot of command-line work! There were few dynamic websites and a lot of free hosts such as Geocities, Tripod and Angelfire. But overall, there was much less compliance.
  • The biggest changes in recent years include some game-changers, such as the rise of WordPress, Node.js and Ruby. The cloud has changed a lot too, for instance with services like AWS. And what of performance changes? It used to be simple caching, but today the complexity is higher.
  • The future – will it even have web hosting? Possibly there will be no web hosting at all. “The platform”, such as Shopify and Wix, will be more important. Technology cycles are already getting shorter, and disruptions are happening faster. Hopefully, at least DNS should get better.

…Well, what are you waiting for? Join Joe and Lukas as they take you through the magical transformation of the web hosting landscape. Get ready to stream our first ever Plesk podcast. You can also go directly to Plesk Podcast: Next Level Ops on Simplecast.

The Official Plesk Podcast: Next Level Ops Featuring

Joe Casabona

Joe is a college-accredited course developer. He is the founder of Creator Courses.

Lukas Hertig

Lukas is the SVP Business Development & Strategic Alliances at Plesk.

Remember to update your daily podcast playlist with Next Level Ops. And stay on the lookout for our next episode.