No two web browsers are identical. Intuitively, we tend to think all web browsers are basically the same, especially the popular ones; however, their architecture, user interface design, standards compliance, and so on can differ so much that they won't necessarily display a given website the same way. Hence the need to test our websites on different web browsers.
There's a high probability that you have a favorite web browser and that you use it for your everyday tasks and any other browsing needs. What's your tool of the trade? Chrome? Or maybe Firefox, Safari, or IE? Each of them renders web pages in a noticeably different way.
When we're settled into our web browser of choice, it's easy to lose sight of the internal mechanisms by which each browser works. But whenever we switch to some other browser and look a little closer, we start to grasp that they're indeed built quite differently, and that they therefore behave differently and offer different features; we may well be surprised by that. The layout of the pages we know best might not look the way we're used to, and specific features may not behave as expected, if they're available at all.
But we have a way out of this problem: testing our website in different browsers.
If our website is already online, we can take advantage of that and plan much better by getting to know our target audience before starting any testing. We want to know which browsers and versions they're using, on which OS platforms, and so on. Tools like Google Analytics, Open Web Analytics, Adobe Analytics, or any analogous product can help us gain useful insights about our website's audience. This sort of data will help us maximize our efforts and design better-targeted software, and hence more specific tests.
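For example, if our site uses Google Analytics 4, we could pull a browser/OS breakdown programmatically with the official @google-analytics/data Node.js client. This is only a minimal sketch: the property ID is a placeholder, and the dimensions and date range are illustrative choices.

```ts
import { BetaAnalyticsDataClient } from '@google-analytics/data';

// Assumes Application Default Credentials are configured in the environment.
const client = new BetaAnalyticsDataClient();

async function browserBreakdown() {
  // 'properties/123456789' is a placeholder for your GA4 property ID.
  const [response] = await client.runReport({
    property: 'properties/123456789',
    dateRanges: [{ startDate: '90daysAgo', endDate: 'today' }],
    dimensions: [{ name: 'browser' }, { name: 'operatingSystem' }],
    metrics: [{ name: 'activeUsers' }],
  });

  for (const row of response.rows ?? []) {
    const [browser, os] = row.dimensionValues!.map((d) => d.value);
    const users = row.metricValues![0].value;
    console.log(`${browser} on ${os}: ${users} active users`);
  }
}

browserBreakdown().catch(console.error);
```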
A browser matrix is the list of browsers (and their versions) and OSs on which your website/app will be tested. It helps us rule out browsers that may not render certain features or functionalities of our website, and therefore run quicker tests. It complements the web analytics tools, since it lets us prioritize the browsers and OSs on which we'll focus our testing; unsurprisingly, we'll choose the most popular ones. The pattern is clear: real-world data feeds our application and test design. A browser matrix is also an important part of a well-systematized test plan, and results in a well-devised testing process.
Once we've extracted all that data from the audience analysis, we're ready to determine which browsers our website should support by creating a compatibility matrix.
Browsers will fall into one of the following categories:
A: Fully supported and preferred browser.
B: Fully supported but not a preferred browser.
C: Partially supported but preferred browser.
D: Partially supported and not a preferred browser.
E: Unsupported but preferred browser.
F: Unsupported and not a preferred browser.
Our analytics results will yield traffic and conversion rates per browser. Relating those two properties will help us decide whether a given browser should be included in the support list.
An example of a traffic/conversion table can be seen in the browser compatibility matrix guide referenced at the end of this post.*
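To make the categorization concrete, here is a small sketch of how it could be automated. The "preferred" thresholds and the sample numbers are purely illustrative assumptions, not recommendations:

```ts
type Category = 'A' | 'B' | 'C' | 'D' | 'E' | 'F';

interface BrowserStats {
  name: string;
  trafficShare: number;    // fraction of total sessions, e.g. 0.42
  conversionRate: number;  // fraction of sessions that convert
  support: 'full' | 'partial' | 'none'; // how well the site works there
}

// Illustrative rule: a browser is "preferred" if it drives meaningful
// traffic AND converts reasonably well.
function isPreferred(b: BrowserStats): boolean {
  return b.trafficShare >= 0.1 && b.conversionRate >= 0.02;
}

function categorize(b: BrowserStats): Category {
  const preferred = isPreferred(b);
  switch (b.support) {
    case 'full':    return preferred ? 'A' : 'B';
    case 'partial': return preferred ? 'C' : 'D';
    case 'none':    return preferred ? 'E' : 'F';
  }
}

// Purely illustrative numbers:
const chrome: BrowserStats = {
  name: 'Chrome', trafficShare: 0.55, conversionRate: 0.04, support: 'full',
};
console.log(`${chrome.name}: category ${categorize(chrome)}`); // Chrome: category A
```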
Once we know which browsers our website/app should support, we can proceed to devise a test strategy.
At first, testing a website or web application across different browsers may strike us as somewhat overwhelming. Far from it: what it really requires is a careful, thorough plan that leads to a well-strategized testing process and will hopefully spare us undesired problems. Needless to say, on a large project we'll need to test regularly enough to ensure that new features work for our target audience and that old features remain functionally stable and don't break when new code is added.
A dynamic approach is key. Running our tests throughout the development process is much better business than leaving that task for the end: we'll not only save lots of precious time (and money) but also find and fix bugs earlier.
We can devise a workflow for testing and bug fixing, broken down into a set of phases. For example:
1) Initial Planning > 2) Development > 3) Testing/Discovery > 4) Bug fixes/Iteration
We'll want to iterate over steps 2 through 4 as many times as it takes to get all the implementations on track. Let's look at each phase in more detail:
The first thing is to figure out the look and feel of our website: the content it will have, the functionality at the UI level, and so on. Assessing the time available before our deadline is highly important in this phase, as is determining the set of features the site will offer and the technologies that will be implemented.
Divide and conquer, as the motto goes: a smart approach at this phase is to divide our website or web app into modules, e.g. home page, catalog, shopping cart, payment flow, etc., along with whatever subdivisions each of those components implies.
Ideally, all functionality will work in all target browsers. We may need to implement different code paths to deliver certain functionality in different browsers in order to achieve the widest possible support. At some point, however, we must either accept that certain functionality won't work the same way in all browsers, and provide reasonable fallbacks for browsers that don't support it fully, or accept that our website/web app simply isn't going to work in certain (older? less popular?) browsers. These are generally acceptable resolutions, provided the user base conforms to them.
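As an illustration of what a per-browser code path can look like, here is a minimal feature-detection sketch; lazy-loading images via IntersectionObserver is just one hypothetical use case, and the right fallback will always be project-specific:

```ts
// Prefer the native IntersectionObserver API for lazy-loading images,
// and fall back to eager loading on browsers that lack it.
function lazyLoadImage(img: HTMLImageElement, src: string): void {
  if ('IntersectionObserver' in window) {
    // Supported path: load the image only when it scrolls into view.
    const observer = new IntersectionObserver((entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) {
          img.src = src;
          observer.unobserve(img);
        }
      }
    });
    observer.observe(img);
  } else {
    // Fallback path: load immediately; less efficient, but functional.
    img.src = src;
  }
}
```

The same pattern, detect first and then branch, generalizes to any API with uneven browser support.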
In this phase we focus on new functionality. First, we must check that there are no blocking issues in our code that could prevent a given feature from working.
We can begin by testing in a few stable, popular browsers, e.g. Firefox, Safari, Chrome, IE, etc. Then we can move on to a few lo-fi accessibility tests, like browsing without a mouse, or using a screen reader to assess browsability for visually impaired people. We'll of course also want to test on mobile platforms like Android or iOS.
Testing should encompass the most popular desktop OS platforms (Windows, macOS, and Linux distributions like Ubuntu or Red Hat) for ALL browsers, popular or not. Then we'll want to enlarge the list of browsers to include less popular ones such as Opera, Vivaldi, Konqueror, etc. We'll test on real, physical devices wherever possible; emulators and/or virtual machines will come in handy when we lack the means to test every desired or required combination of OS, browser, and version.
An alternative is user groups, i.e., asking people from outside the development team to test our website on certain OS platforms and browsers.
And, last but not least, we can make a very smart move by adopting automation tools. Besides saving us a lot of time, they can take over repetitive tasks once the components we need reach a certain level of stability.
There are free, open-source tools as well as commercial ones. Practically all of them can help us automate most of our test sets and suites, be they regression or end-to-end tests. We'll expand on this topic later.
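To give a flavor of such automation, here is a minimal cross-browser smoke test sketched with Playwright, one open-source option among many; the URL is a placeholder, and a real suite would assert much more than the page title:

```ts
import { chromium, firefox, webkit, Browser } from 'playwright';

// Run the same smoke test against the three engines Playwright ships with.
async function smokeTest(launch: () => Promise<Browser>, name: string) {
  const browser = await launch();
  const page = await browser.newPage();
  await page.goto('https://example.com'); // placeholder URL
  const title = await page.title();
  console.log(`${name}: title is "${title}"`);
  await browser.close();
}

async function main() {
  await smokeTest(() => chromium.launch(), 'Chromium');
  await smokeTest(() => firefox.launch(), 'Firefox');
  await smokeTest(() => webkit.launch(), 'WebKit');
}

main().catch(console.error);
```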
Before even attempting to fix a newly found bug, we need to isolate it to the point where we can identify, pretty much exactly, where and how it happened.
We'll begin by gathering all the information we can about it: user flow, app component, platform, device, browser version, etc. Then we'll try to reproduce it on several other configurations and combinations of those (same browser/version on a different OS; different browser/version on the same OS; and so on) and include all behavioral details in the report the developers will use to fix the bug.
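A lightweight structure for capturing those reproduction details might look like the following sketch; the field names and sample values are hypothetical, not a standard format:

```ts
// Hypothetical shape for a cross-browser bug report entry.
interface ReproAttempt {
  browser: string;      // e.g. "Firefox 120"
  os: string;           // e.g. "Ubuntu 22.04"
  device: string;       // e.g. "Desktop", "Pixel 7"
  reproduced: boolean;
  notes?: string;
}

interface BugReport {
  id: string;
  userFlow: string;     // steps that trigger the bug
  component: string;    // affected app module, e.g. "payment flow"
  attempts: ReproAttempt[];
}

const report: BugReport = {
  id: 'BUG-0001',
  userFlow: 'Add item to cart, then open checkout',
  component: 'shopping cart',
  attempts: [
    { browser: 'Chrome 120', os: 'Windows 11', device: 'Desktop', reproduced: false },
    { browser: 'Safari 17', os: 'macOS 14', device: 'Desktop', reproduced: true,
      notes: 'Checkout button stays disabled' },
  ],
};
```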
Once all that is done and the bug is finally fixed, we'll run through the whole process again to make sure the fix actually solves the issue and doesn't cause breakage in other parts of the code.
What are the best automation testing tools available? That's a tricky question. The truth is, the different tools on the market each have their own pros and cons, and in most cases the pricing is not transparent. We recommend doing some research and looking at a few lists online; avoid companies that don't publish any pricing, and go with companies that aren't simply tools, but have real humans and outstanding customer success behind them to support your QA team.
In our case, Autify is a full suite for testing.
We have a long-term roadmap in place to expand Autify's capabilities and product lineup, which consists of three phases.
To see whether Autify is a good fit for you, take a look at our clients' success stories: https://nocode.autify.com/why-autify
We have different pricing options available in our plans: https://nocode.autify.com/pricing
All plans offer unlimited app testing and an unlimited number of users.
Give our Free Trial a test!
* https://www.lambdatest.com/blog/creating-browser-compatibility-matrix-for-testing-workflow