Our Ramblings

A look at some useful PHP functions: isset() vs empty() vs is_null()

We're currently developing a new system in collaboration with our partners at 802 Works – Redefining Wireless. As part of that, we're designing a new system for administering wireless networks and maximising the marketing leverage that can be gained from them. That job involves writing quite a lot of PHP pages and scripts, so we thought we'd share a short tip on PHP variable declaration.

In this post we're going to look at the different ways that a variable can be empty. PHP offers several ways to test a variable; three of the most useful are isset(), empty() and is_null(). Each of these returns a boolean value (either true or false).

Let's take a look at each of these in a little more detail:


isset()

From the bible of PHP, the PHP manual, isset()'s job is to "determine if a variable is set and is not NULL". In other words, it returns true only when the variable has been declared and its value is not NULL.


empty()

Again from the PHP manual, empty()'s job is to "determine whether a variable is empty". In other words, it returns true if the variable is an empty string, false, array() (an empty array), NULL, "0" (as a string), 0, or an unset variable.


is_null()

Finally, from the PHP manual, is_null()'s job is to "find whether a variable is NULL". In other words, it returns true only when the variable is NULL.

You may now be thinking that is_null() is the opposite of isset(), and broadly speaking you'd be correct. There is one difference, however: isset() can safely be applied to an undeclared variable, whereas is_null() should only be used on declared variables (it raises a notice otherwise).
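A short sketch illustrates that difference (the variable names here are our own, purely for illustration):

```php
<?php
// A variable declared and set to NULL.
$declared = null;

var_dump(isset($declared));    // bool(false) - a variable set to NULL counts as "not set"
var_dump(is_null($declared));  // bool(true)

// isset() can safely be applied to a variable that was never declared:
var_dump(isset($undeclared));  // bool(false), no notice raised

// is_null() on an undeclared variable also returns true, but PHP emits
// an "Undefined variable" notice (suppressed here with @):
var_dump(@is_null($undeclared)); // bool(true)
```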

The table below is an easy reference for what these functions will return for different values.


Value of variable ($var)                    isset($var)   empty($var)   is_null($var)
"" (an empty string)                        bool(true)    bool(true)    bool(false)
" " (a space)                               bool(true)    bool(false)   bool(false)
FALSE                                       bool(true)    bool(true)    bool(false)
TRUE                                        bool(true)    bool(false)   bool(false)
array() (an empty array)                    bool(true)    bool(true)    bool(false)
NULL                                        bool(false)   bool(true)    bool(true)
"0" (0 as a string)                         bool(true)    bool(true)    bool(false)
0 (0 as an integer)                         bool(true)    bool(true)    bool(false)
0.0 (0 as a float)                          bool(true)    bool(true)    bool(false)
$var; (declared, but without a value)       bool(false)   bool(true)    bool(true)
"\0" (a NULL byte)                          bool(true)    bool(false)   bool(false)


I have tested the above values in PHP 7.1.9 which was released on September 1, 2017.
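The table can be reproduced with a short script like the one below (the labels and layout are our own; the undeclared-variable row is omitted since it can't be placed in an array):

```php
<?php
// Quick sanity check of the reference table above.
$tests = [
    'empty string' => "",
    'space'        => " ",
    'FALSE'        => false,
    'TRUE'         => true,
    'empty array'  => [],
    'NULL'         => null,
    'string "0"'   => "0",
    'integer 0'    => 0,
    'float 0.0'    => 0.0,
    'NULL byte'    => "\0",
];

foreach ($tests as $label => $value) {
    // Note: isset() on an array element whose value is NULL returns false,
    // matching the NULL row of the table.
    printf(
        "%-13s isset: %-5s  empty: %-5s  is_null: %s\n",
        $label,
        isset($tests[$label]) ? 'true' : 'false',
        empty($tests[$label]) ? 'true' : 'false',
        is_null($tests[$label]) ? 'true' : 'false'
    );
}
```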

Which website shopping cart?

We have recently undertaken a study to determine which of the many shopping cart systems we should use on a customer's website. After a long process of trawling through the myriad of options, we finally reached a shortlist of eight candidates.

These candidates were:

  • Avactis
  • CS Cart
  • Cube Cart
  • Magento
  • OS Commerce
  • Prestashop
  • Virtuemart
  • Zen Cart

Now, with that part done, the hard work begins.

The fact of the matter is that all of the choices in the list above are great ones. Any of these shopping carts will, with the right implementation, deliver an excellent level of functionality on any website. The trick is to understand your own requirements first and identify which of the options most closely fits them, not only today but also as you anticipate they will be in 6 months, a year, and possibly beyond.

So let's look at the pros and cons of each.

You can find the comparison HERE.


Five Steps on the Journey from Virtualization to Private Cloud

Agility is one of the primary benefits of a virtualized environment, and as a result, many organizations virtualize with the hope of improving their IT infrastructure’s ability to provide business value. Having completed a number of P2V transitions, they are able to achieve some cost savings, but find agility more difficult than originally anticipated. Private cloud application deployment is one example of how virtualization can improve the data center’s agility and contribute to business success. Here are what we believe to be the five key concepts for harnessing the potential of private clouds.

1. Pair Virtualization with Management

One of the most critical aspects of the virtualization process is management. IDC surveys found that the companies with significant virtualization rollout are far more likely to name management as a critical component than those just beginning to virtualize. This disparity comes from a measurable lag between adoption of new technology and the adoption of new management techniques. In many cases, virtualization management is underdeveloped relative to areas such as network, storage, and systems. In particular, four areas of virtualization management have a major impact on the success of virtualization projects.

* Configuration management: As your virtual environment grows in size and complexity, maintaining and monitoring application configurations, changes, and interdependencies becomes increasingly difficult. Good management software can ensure effective configuration management, improving application deployment by up to 96 times.
* Capacity planning and VM placement: As your virtualized resource pool grows in size, you will need tools that analyze capacity trends and optimize where your VMs run to minimize hardware footprint, thereby maximizing the potential for server consolidation. The results are impressive: power usage reduced by up to 16%, saving £700,000 annually for a 5-megawatt data center.
* Performance monitoring: As resource allocation problems threaten to affect the user experience of applications running on virtual systems, your administrators need VM-aware performance monitoring tools that can help pinpoint issues. These tools can reduce service failures, achieve up to 24x faster MTTR, and push uptime to 'five nines'.
* Real-time automation: As the tasks needed to maintain your virtual environment grow increasingly challenging, you'll need tools that can perform real-time automation by adjusting virtual or physical infrastructure to compensate for failures. Success can increase staff efficiency by 10 to 270% and improve deployment time by 240x, with combined savings of up to £1,200 per server.

2. You Can’t Just Use the Same Old Tools

One important takeaway from the transition to virtual environments is the need for a different toolset than the one employed in a physical setting. In order to save on software purchases, or simply out of habit, many organizations attempt to use physical tools in the virtual space. Though the virtual environment requires provisioning, change management, and root-cause analysis just as physical systems do, the mechanisms used to accomplish them are very different. Attempting to use physical tools in a virtualized environment will, at best, leave some of the benefits of virtualization on the table, and at worst make effective management of your environment impossible. For example, provisioning in the physical space is a manual, server-focused, and highly coordinated activity performed by system admins. In the virtual space, provisioning is a dynamic process that is often self-service and highly automated. Using tools that require manual provisioning severely restricts the agility benefits of virtualization. Other tasks, such as change management and root-cause analysis, change just as dramatically from their physical counterparts.

3. Integrate Lifecycle Management Disciplines

In addition to requiring new tools and workflows, virtualization makes management more complex by blurring the lines between the different areas of traditional IT management. It also creates a series of individual difficulties. A common but problematic response is to look for a point solution to each individual area of lifecycle management. These products add complexity and siloing to the virtualized environment by increasing the number of tools needed to manage it. Point solutions lack the integration across multiple disciplines that is necessary to succeed in the inherently integrated virtualized world. For example, capacity management needs to integrate with performance management in order to prevent over-provisioning. Point solutions also increase risk: a tool may become unavailable at a moment's notice if a smaller vendor goes out of business. Management software should span all areas of lifecycle management in order to reduce complexity in the data center and prevent further organizational difficulties.

4. Integrate Management Across Domains

One of the key aspects of successful automation implementations, and of staff cost savings by extension, is the integration of management tools across all domains within the organization. If aspects of certain processes, whether electronic or human, are left out of the automation implementation plan, serious problems can follow. At best, this holds back automation tools, which are unable to complete tasks beyond their purview. At worst, it can restrict the visibility of system admins and make root-cause analysis, and virtual deployment in general, more difficult. Enterprises with integrated tools for physical and virtual systems management are more likely to outperform industry averages in:

  • Achieving SLAs and maintaining uptime (17% more likely)
  • Maximizing server utilization (VMs per server) (16% more likely)
  • Improving VM deployment times (30% more likely)
  • Reducing overall data center power consumption (20% more likely)

5. Virtualization Management + Automation + Service Management = Cloud

The final step in the process of transitioning to private cloud is recognizing that virtualization management, automation, and service management are the key components that make up a private cloud.  Characteristics, rather than specific technology, define private clouds.  Multiple analysts have commented on this fact.  EMA writes: “Automation, Virtualization, and Service Management are fundamental building blocks for Private cloud.” The 451 Group notes that “the ability to automate almost all aspects of infrastructure operation is absolutely necessary to meet many of the cloud’s requirements.” Ultimately, the success or failure of private cloud ventures depends on these core areas.  Some variety of point solutions may allow for a precarious but successful transition to private cloud.  However, with a full-fledged, cross-discipline, cross-platform management solution, organizations can reap the full cost, agility, and continuity benefits that private cloud computing is capable of providing.

Ultimately, it is too easy to treat private cloud as a specific and distinct technology instead of a set of functionality within your existing infrastructure. As a result, analyzing your current infrastructure and processes with an eye towards an integrated and automated management solution is the most important step. Rustyice Solutions can provide you with an end-to-end virtual management solution. Our processes are designed to properly link your transition plan to the core components of a dynamic data center, which in turn is the basis for all private clouds. Our suite of integrated transition methodologies specifically addresses the core areas of private clouds with the depth, quality, and understanding needed to fully harness the benefits of the cloud. Call us today to learn more about how Rustyice Solutions can make the cloud work for you.

Recent Developments in Private Clouds

While most of the initial interest in cloud computing focused on companies offering their cloud services for businesses to use as needed, there has recently been a shift towards so-called private clouds. These leverage an existing data centre infrastructure to provide scalable capacity throughout the business, at the times it is needed.

Nimbula is the latest company to release their software to the public, currently in the form of an open public beta trial. Their newly launched proprietary cloud operating system, Nimbula Director, aims to mimic the Amazon EC2 public infrastructure cloud within the corporate data centre. It is also designed to fully automate the process of creating the private cloud by exposing a Representational State Transfer (REST) API, which is then accessed through a web console or the command line.

Designed to install on bare-metal servers and supporting both KVM and Xen hypervisors, Nimbula is intended to provide access to computing resources as required, by load-balancing and distributing numerous applications across the entire estate, whilst making the data available resiliently and reliably in multiple different locations.

In addition to Nimbula, a couple of other private cloud offerings are currently available, Eucalyptus and OpenStack, both of which are at least partially open-source. OpenStack was created by RackSpace and NASA and is designed to be very scalable, which is the main problem that Eucalyptus suffers from. All three of these products are designed to be built from the ground up and could require considerable time and effort to migrate an existing infrastructure to, so they are more likely to be adopted by new systems during the initial design and development phase, or as a new hosting platform for an existing application.

An alternative process was conceived by Gridcentric with their Copper application, which is designed to rapidly create copies of existing virtual machines on-the-fly. By continuing to run with existing hardware and software platforms, this method reduces the time taken to realize fully scalable applications, with the benefit of centralized management. This approach is more likely to be popular for applications that need to be scalable for periods of increased demand.

The ability of a business to build its own cloud to its particular requirements is something that is going to become increasingly popular over the next few years. These new products might be the first in the market, but sooner or later, the big players are going to move into this marketplace, either by releasing their own products, or more likely through acquisition or merger.