Frontend Software Testing
Top-Up Thesis, Software Development, COS 2018
Copenhagen Business Academy
June 1st, 2018
Authors: William Bech (wb-21) and Lukasz Koziarski (lk-139)
Councillor: Caroline Hundakl Simonsen

Abstract

I. Complex ReactJs web applications that have no test frameworks integrated into their development cycle suffer from a great number of bugs. UI code is particularly vulnerable to bugs because the combination of API flakiness, unpredictable user input, and race conditions makes it incredibly easy for logic errors to occur. This leads to poor user experience, escalates support tickets, and slows down the entire development process. This paper focuses on choosing the right tools for verifying the behaviour of the code in Penneo's ReactJs application, on the implementation of the actual tests, and on an investigation of the business value achieved for the company.
A well-tested UI minimizes the number of critical issues, ensures product quality and leads to a better user experience. II. As the power of personal computers has grown exponentially over the years, the assignment of processing power in applications has started to spread out from servers. This has led to more processing being done on client machines. As client code becomes more and more complex, it needs to be increasingly regulated in order for it to perform as wanted.
As competition between tech companies increases, they are now focusing more and more on giving users a seamless experience. A seamless experience arises from having the perfect balance of processing delegation and user experience. This has led client code testing to become ever more important, as in order to accomplish a perfect user experience, client frontend code needs to always behave as expected.

Preface

This report has been written for the Top-Up degree in Software Development at the Copenhagen Business Academy.
The report discusses the implementation and technicalities of creating frontend testing environments. The project has been worked on under the supervision of Jan Flora, the CTO of Penneo ApS, and Jesus Otero Gomez, the Head of Frontend. The requirements of the project were set by us, the writers, in accordance with both Mr.
Flora and Mr. Otero. We would like to thank all of the Penneo team for the great opportunity, and in particular Ahmad Raja, Jan Flora and Jesus Otero Gomez for their guidance and for inspiring us throughout the development of the project. We would also like to thank our councillor Caroline Hundakl Simonsen.

William Bech, Lukasz Koziarski
June 1st, 2018

Table of Contents

Abstract 1
Preface 2
1. Introduction 5
1.1 Penneo 5
1.2 The existing solution 6
1.3 Product 7
1.4 Thesis 7
2. Project Establishment 8
2.1 Problem Specification 8
2.2 Requirement Specification 9
2.3 Product establishment 10
2.3.1 Test Focus 10
2.3.2 Test Division 11
2.3.4 Testing theory applied to ReactJs and Flux 13
3. Planning phase 14
Plan 14
4. Technology 16
4.1 Unit and component testing 16
4.1.1 Test Framework 16
Setup 17
Features 17
Performance 18
External references 19
4.1.1 ReactJs Component Traversing 20
3.2 Functional tests (UI Testing) 21
3.2.1 Test framework 21
3.2.2 Browser automation 23
3.3 Continuous Integration Service 26
3.4 Code Uniformity 26
Coding Standards 27
2.4.1 Unit tests 27
General standards 27
Testing components 28
Testing stores 29
Functional tests 29
Methodology 30
Design 31
Unit tests 31
Test Execution 31
Unit tests 31
Functional tests 33
Refactoring 34
Code base restructure 34
React stores 34
Implementation 35
Unit tests 35
Functional tests 35
Conclusion 36

Chapter 1
Introduction

This chapter gives an overview of the thesis, the thesis product, and the company in which the thesis product has been completed.

1.1 Penneo

Penneo is a Danish software company which provides other companies with a digital signature platform. Penneo was founded in 2012 by 3 founders and has since seen immense growth year over year.
The Penneo headquarters are located in Søborg, with 25 employees split between Product Development, Sales and Customer Journey. The core focus of Penneo is signing documents digitally with the use of electronic identifications (eIDs). This gives a document digitally signed with Penneo as much legal standing as a pen and paper signature. Penneo does not only offer a way of signing documents, but also a way of creating complex workflows for document signing, a way to manage all of a user's Penneo data, as well as a way to create forms that are eventually to be signed. Penneo currently has more than 700 customer companies, the majority of them being accountant, lawyer and property administration companies. Penneo's product development team is split up into three sub-groups: devops, backend and frontend.
The customer journey team contains a client support team, an onboarding team and a customer follow-up team. The client support team also occasionally acts as quality assurance (QA) for the Penneo product. The sales team is split up into bookers, employees who call customers in order to book meetings with them, and meeters, employees who go out and conduct meetings with potential customers. We, the authors, have been part of the Penneo frontend development team for the last 2 years and have spent the last three months working on creating the thesis product.
1.2 The existing solution

Penneo offers multiple ways of using their platform. The platform can be used through a web application, an API integration and a desktop application. The desktop application is built using Electron, which allows native applications to be developed using web technologies. The Penneo frontend is created using ReactJs (v14) with a Flux (https://facebook.github.io/flux/) data flow architecture. The frontend development team uses Git and GitHub for code versioning and Travis (https://travis-ci.org/) for continuous integration. The frontend codebase is split up into 7 different projects which depend upon a single master project to run.
Each project is split up by business case and is hosted in a separate repository on GitHub. The projects are as follows:

Fe-application-loader: This is the master project that is needed in order to run the other frontend projects. The fe-auth, fe-forms, fe-translations, fe-components and fe-casefiles projects are all linked to this project with the use of npm (https://www.npmjs.com/). This project contains wrapper components such as the application's core headers and sidebars. The project also contains most of the frontend's routing logic.
Fe-auth: Contains all of the logic that has to do with authentication; all of the frontend authentication logic and the login and authentication management components are kept here.

Fe-forms: Contains the logic to do with the creation of forms. Forms are PDF documents that need to be modified before being signed. The logic comprises creating editable forms and editing the created forms. This project also contains the frontend for the core business functionality: signing documents.

Fe-translations: This project holds the logic for Penneo's translation module and also contains all of the strings present in the frontend with their respective translations for Danish, Swedish and Norwegian.
Fe-components: Here are all of the pure React components. This is a library from which all of the reusable React components can be taken.

Fe-casefiles: This project holds the frontend logic for creating case files. Case files are a business-specific way of creating signing flows.
Fe-desktop: This is the only project which is not created using ReactJs and is not linked to the fe-application-loader. It is created using Electron and is where all of the desktop-related logic is kept.

1.3 Product

The product the report is based on is the creation of two testing environments for the frontend of the Penneo platform. The testing environments have been implemented in order to increase the code quality and reliability of the application. These two environments focus on testing different aspects of the frontend. One environment tests the functionality of the application, and the second tests the frontend code units, focusing on the implementation of logic.
The created product does not only include tests but also their integration into Penneo's existing system. This includes refactoring code that was non-testable, as well as integrating the tests into the current development workflow with Git and Travis. The final product also includes a complete restructure of the Penneo frontend code base.

1.4 Thesis

Thesis structure

Chapter 2
Project Establishment

When establishing the project it was decided to look at what problems Penneo's frontend team was facing and how they could be fixed.
A problem was defined as something that stopped the frontend team from working on new features and/or what they were supposed to work on. This excludes issues with the surroundings and work environment of developers, as a product couldn't be created to fix such problems.

2.1 Problem Specification

The frontend team faced a couple of problems; the codebase lacked:
– Tests
– A solid project structure
– Strict coding rules

There was also another problem that was not directly code related, but whose root cause could be traced back to the code: context shifts.
A context shift happens when a developer is working on something, set in the context of what they have to achieve with their train of thought established, and is then disturbed by having to work on something else. The switch from working on a task, before having completed it, to having to work on another task is what defines a context shift. This was deemed to be the biggest problem for the frontend team. After further research, context shifts were concluded to happen due to the discovery of bugs in production. This would lead to developers having to stop what they were currently working on and switch over to fixing the discovered bug.
So the root cause of context shifts was bugs. It was then possible to bring all of this back to what the code was lacking, and it was concluded that in order to achieve a decrease in the number of bugs released to production, tests would have to be introduced into the development cycle. Another related problem was that when a developer had to fix a bug, the developer would often introduce new bugs. The reason being that most bugs arise in contexts that are very dependent on business logic. The lack of a way to verify whether a bug has been fixed without breaking any other functionality has often caused a developer to have to come back to the same piece of code to fix a different bug. This problem could also be fixed by having tests.
Another problem that was discovered was that Git (https://git-scm.com/) pull requests were often forgotten about on GitHub (https://en.wikipedia.org/wiki/GitHub). This meant new features that had been completed were in limbo, waiting to be merged into the master code. This would then slow down the development cycles needlessly.
This happened due to the structure of the frontend, as it had 7 different projects. As every pull request required at least one reviewer to review the newly written code in order for it to be merged with the master code, developers would often have to look through the 7 project repositories in order to find open pull requests. The root cause of this problem was deemed to be the structure of the frontend project. After having identified these problems, it was decided that it would be more reasonable, within the project's time frame, to focus on one problem.
That problem was selected as the one which slowed development the most. The biggest problem was deemed to be the lack of tests, thus the product would be the creation of testing environments for the Penneo frontend.

2.2 Requirement Specification

The main goal of the product was to decrease context shifts by decreasing the number of bugs being released into production.
This meant the decrease in the number of discovered bugs and context shifts could be used as a success criterion for whether the final product was successful. As a success criterion should be quantitatively verifiable, it was decided that the criterion would only be based on the number of discovered bugs. This is due to statistics on bug discovery already having been implemented, while tracking of context shifts had not. The success criterion would be used as a basis for what the product had to achieve.
It would also be used in order to specify what needed to be tested. Due to the code base being very large and feature rich, implementing tests throughout the entire code base would be an extremely time-consuming task. Focusing on the success criterion, it was decided that creating tests for new features, as well as for the core logic of the application, would best help achieve the goal.
The product requirements could not only be set as to what the product had to achieve, but also as to how it had to be achieved. The requirements for the product implementation are:
– Tests have to be easy to write so that developers can focus their time on feature development.
– Tests need to be created using relevant and future-proof tools, so that the test setup won't have to be changed, ever or for a long time to come.
– Relevant tests have to be readable to the QA team.
– Functional UI tests need to be written in the Behaviour Driven Development (BDD) (https://dannorth.net/introducing-bdd/) style in order to support collaboration between the QA team and developers.
– The testing framework for functional tests must support browser automation utilities.
– Set up test environment(s) that will automatically run for relevant actions and/or tasks.
– Tests have to test the dependability, consistency and reliability of the code.
– Most, if not all, tests have to be automated.
– Test what the code should do, not how.
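To illustrate the BDD style named in the requirements, here is a minimal sketch of what such a readable test could look like. The describe/it/expect helpers are stubbed inline so that the sketch is self-contained; in the real product they would come from the chosen test framework, and the login validator is an invented stand-in, not Penneo's actual code.

```javascript
// Stand-ins for a BDD framework's describe/it/expect, stubbed so this
// sketch runs on its own; a real framework provides far richer versions.
const results = [];
function describe(name, fn) { fn(); }
function it(name, fn) {
  try { fn(); results.push('PASS ' + name); }
  catch (e) { results.push('FAIL ' + name + ': ' + e.message); }
}
function expect(actual) {
  return {
    toBe(expected) {
      if (actual !== expected) throw new Error('expected ' + expected + ', got ' + actual);
    },
  };
}

// Hypothetical login validator under test (invented for illustration).
function canSubmitLogin(form) {
  return form.email.includes('@') && form.password.length >= 8;
}

// BDD-style specs read as behaviour descriptions a QA team can follow.
describe('Login form', () => {
  it('allows submission with a valid email and password', () => {
    expect(canSubmitLogin({ email: 'a@b.dk', password: 'longenough' })).toBe(true);
  });
  it('rejects submission when the password is too short', () => {
    expect(canSubmitLogin({ email: 'a@b.dk', password: 'short' })).toBe(false);
  });
});
console.log(results.join('\n'));
```

The spec names, not the assertions, are what makes such tests readable to non-developers, which is the point of the BDD requirement above.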
2.3 Product establishment

This section will explain the theory behind the decisions made in order to establish the product, and how the theory affected the product establishment. It will delve specifically into the theory behind testing. This covers what tests should be written, the distribution of different test types, and the theory applied to the current code base. This section should also give a brief overview as to what the selected technologies should fulfill.
2.3.1 Test Focus

Before starting to write tests, the types of tests that would be most useful for the product had to be decided on. The Agile Testing Quadrants (http://istqbexamcertification.com/what-are-test-pyramid-and-testing-quadrants-in-agile-testing-methodology/) were used in order to help decide which types of test would be most relevant. The Agile Testing Quadrants is a matrix that can be used to split up types of tests by what they affect most. A brief overview of what each quadrant represents: quadrant 1 is focused on testing the code itself; unit tests fall under this quadrant. Quadrant 1 helps support the team and focuses on technology.
Quadrant 2 focuses on testing the functionality of the application; it also focuses on code, but from a business aspect, as it tests the fulfillment of functionality. Functionality and system tests fall under this quadrant. Quadrants 3 and 4 assume that code has already been written and deployed. Quadrant 3 is a manual type of testing and focuses on actual use of the application.
This can be seen as releasing alphas/betas, as well as releasing functionality in batches to certain users, among other things. Quadrant 4 focuses on non-functional aspects. Looking at the testing quadrants and the success criterion for the product, it was clear that quadrants 1 and 2 would make the product most successful. Quadrants 3 and 4 rely on black box tests, which would not fix the issue of discovered bugs: Penneo has a very limited amount of manpower to focus on testing, and black box tests are not as thorough as automated white box tests. As Quadrant 4 focuses on non-functional aspects, it would not help make the code base more reliable, only more performant. Quadrant 3 would help make the product more reliable, but Penneo has already implemented staging and sandbox environments on which QA can shallowly test the application before new releases. This kind of test has proven to be very time consuming, and with only a part-time QA employee these tests often prove not to be enough.
Quadrants 1 and 2 would help solve the problems faced, as they focus on testing units of code (Quadrant 1) as well as code functionality (Quadrant 2). Quadrants 1 and 2 also fit the requirements, as the tests are all automated, getting rid of the issue of lack of manpower (see Foundations of Software Testing: ISTQB Certification, https://www.amazon.com/Foundations-Software-Testing-ISTQB-Certification/dp/1408044056).

Figure ?: Testing Quadrants (https://lisacrispin.com/2011/11/08/using-the-agile-testing-quadrants/)

With the help of the Agile Testing Quadrants it was possible to conclude that the product would have to contain both functional tests as well as unit tests. As the tests that would make up the product had been set, it is then vital to reason about which of the selected tests best help achieve the success criterion.

2.3.2 Test Division

The Testing Pyramid (https://martinfowler.com/articles/practical-test-pyramid.html) (illustrated below, figure ?) is an abstract way of illustrating the division of tests in a project. It illustrates the amount of each type of test a project should optimally contain. It is layered based on the amount of resources it takes to create the tests and on how isolated the tests are; isolated in terms of how focused the tests are on a single unit of code. The pyramid is layered with UI tests at the top, service tests in the middle and unit tests at the bottom. Tests located higher up in the pyramid are tests which require more resources (CPU and time) to execute and are less isolated, meaning that a larger number of code units are run to complete the tests.
The topmost layer (UI tests) should make up 10% of all tests, the middle layer (service tests) should make up 20% of tests, and the lowermost layer (unit tests) 70%.

Figure ?: The Testing Pyramid (https://martinfowler.com/articles/practical-test-pyramid.html)

Here is a brief explanation of each type of test represented in the pyramid:

UI tests: These are tests conducted on the user interface of the application. These tests can include checking whether UI elements are placed properly and whether the UI is properly modified after an action takes place, as well as functional tests that start from the UI. These tests are very fragile, as they depend directly on the implementation of the UI.

Service tests: These tests are made up of API tests, automated component tests, or acceptance tests.
They usually test the integration of different modules and layers, and do so through the invocation of code and not through the UI.

Unit tests: The testing of units/components of code. The purpose is to validate that the units of code perform as intended.
A unit is considered the smallest testable part of software. As it was decided that only unit and functional tests would be conducted for the initial product, not all layers of the pyramid are relevant. The service test layer is not used for the product, making the percentage split of the tests different from the one suggested by the Testing Pyramid. The product's functional tests would fall into the UI testing layer and the unit tests into the unit test layer. The reason functional tests are considered part of the UI testing layer is that all functional tests are performed starting from the UI, and the test assertions will also be made on the UI. The functional tests could be considered functional UI tests. But since the tests don't only test the functionality of the UI, but rather test whether the system as a whole can provide a functionality, it was decided to name them functional tests.
After having looked at the Testing Quadrants and the Testing Pyramid, the scope of the project had been set. The product would have to:
– Consist of unit and component tests.
– Consist of functionality tests.
– Unit and component tests should make up at least 80% of the product.
– Functionality tests should make up no more than 20% of the product.

2.3.4 Testing theory applied to ReactJs and Flux

When looking at our ReactJs project it can be hard to locate where exactly one will find unit tests, as they could live in multiple places. When using a Flux architecture, code is structured into 4 different components: View, Store, Action and Dispatcher. The View is where ReactJs components live; it contains view logic and everything visual.
The Store is where the business logic and the state of the application are kept. Actions are callable events, usually called from the View; an Action then uses the Dispatcher to emit the event to the Store, which in turn changes the state of the application. The data flow is illustrated in the image below.

Figure ?: Flux architecture data flow (https://facebook.github.io/flux/docs/in-depth-overview.html)

As there is logic in both the stores and the views, it can be confusing as to which logic the unit tests need to focus on. The Flux documentation contains a section explaining how to test a Flux architecture (https://reactjs.org/blog/2014/09/24/testing-flux-applications.html). Even though the explanation uses an outdated version of Jest, the overall ideas can be extracted.
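The Action → Dispatcher → Store flow described above can be sketched minimally in plain JavaScript. All names here are invented for illustration (real Flux applications use Facebook's Dispatcher implementation), but the store's update logic is exactly the kind of unit the store tests target:

```javascript
// A toy dispatcher: stores register callbacks, actions are broadcast to all.
class Dispatcher {
  constructor() { this.callbacks = []; }
  register(callback) { this.callbacks.push(callback); }
  dispatch(action) { this.callbacks.forEach(cb => cb(action)); }
}
const dispatcher = new Dispatcher();

// A store holds application state and updates it on dispatched actions.
const documentStore = {
  documents: [],
  getAll() { return this.documents; },
};
dispatcher.register(action => {
  if (action.type === 'DOCUMENT_UPLOADED') {
    documentStore.documents.push(action.document);
  }
});

// An action creator, as would be called from the View layer.
function uploadDocument(document) {
  dispatcher.dispatch({ type: 'DOCUMENT_UPLOADED', document });
}

// A store unit test dispatches an action and asserts on the resulting state:
uploadDocument({ name: 'contract.pdf' });
console.log(documentStore.getAll().length); // 1
```

A test of this kind never touches the View at all, which is what makes store tests fast and isolated.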
The unit tests will test the functions within the stores as well as the ReactJs components. This is because ReactJs components are reusable units of code, making unit tests a natural fit for them. This is why the tests are called unit tests and not component tests, even though the tests are conducted on ReactJs components.

Chapter 3
Planning phase

Plan

The project was executed over 20 weeks, from 04/01/2018 to 31/05/2018. The plan is illustrated below with the use of a Gantt chart. The chart shows the tasks that make up the project and the intended time to be spent executing each task.
The chart below is a revised version of the initial chart and was recreated in week 3.

Figure ?: Gantt chart of the project plan

Research: The research task is when all of the background research for the project was to be done. This task would be conducted over 4 weeks.
The research task had been planned in 4 different phases:
Week 1: Research on how to test frontend projects and ReactJS applications.
Week 2: Research on the types of tests that are suitable for ReactJS.
Week 3: Technology research.
Week 4: Further technology research, specific to the selected technologies.

Problem Definition: This task consists of setting what exact problem was to be fixed in Penneo. This task would be executed over a week, at the same time as research, so that further research could be conducted once the problem had been set. This would be used in order to set the success criterion of the product, and was set with the head of department.
Design: Designing how the testing environments would be set up and how the data flow would work was done in weeks 5 and 6. The design of the projects is discussed in chapter ??, Design.
Setting code standards: The code standards were to be set over 2 weeks, while the project design, code refactoring and project implementation were being done. This was planned to be executed over such a long period as it was believed that it would be most efficient to set the standards as the initial tests were being created.

Refactoring code for product implementation: This task was added at the revision of the Gantt chart. This task would be the refactoring of the code base so that it could be testable; this meant refactoring ReactJS stores and components, as well as restructuring the entire frontend code base. This is further discussed in chapter ??.

Implementation of product: Was to be executed over 10 weeks, in weeks 7 to 16. Everything was to be set and ready for the implementation of the tests.
For the first 4 weeks of the implementation, only one developer was to work on the creation of tests, while the second developer was to continue refactoring code.

Report writing: 6 weeks were allocated to the writing of the thesis.

Chapter 4
Technology

This chapter analyzes the different tools considered for use in the creation of the product, as well as tools that the product would have to use due to the pre-existing implementation of the Penneo software. When selecting testing tools, a couple of things were looked at: the feature set, community, ease of setup, performance and other users' experiences.
4.1 Unit and component testing

When creating unit and component tests for ReactJs, a couple of tools are required in order to properly test the code. Firstly, a test runner is required in order to be able to execute the written tests with the set configuration. Then an assertion library is required in order to be able to make the test comparisons, as well as to have nicely printed test results in the console.
In the case of testing ReactJs code, a mocking library is also required. This is because there are many different elements that make up how a ReactJs project communicates, and setting up all the elements is not always necessary. Having these elements mocked makes the tests more focused on the unit of code, as well as more independent. Also, code in ReactJs is often only executed due to an event taking place, and mocking an element that emits events can make tests more lightweight and faster to execute.
Although, when testing ReactJs code, one more bit of configuration has to be set: defining that Jest is being used within the Babel configuration. Setting up Jest in a ReactJs project takes but a few minutes, and tests are ready to be written. On the other hand, setting up Mocha takes a little more effort.
As Mocha does not offer everything out of the box, external libraries for assertions and mocking have to be used. The time-consuming part is selecting the tools, but as Mocha is such a widely used testing framework, there are plenty of guides online, as well as premade Mocha setups that can simply be copied into an existing project. Setup is the biggest apparent difference between the two, and that is due to Mocha being only a test runner. Having external libraries can be a hassle, as in order to set up tests, further research has to be conducted to know which tools fit the project best. But selecting tools separately can also be advantageous, as only the tools one wants and needs will be included in the testing environment. Jest's approach of having everything out of the box makes it less flexible, but makes the setup easier.
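As an illustration of how small the Jest side of that setup can be, a minimal package.json along these lines would typically suffice. Package versions and preset names here are illustrative only, not taken from the Penneo setup, and the exact names depend on the Babel and Jest versions in use:

```json
{
  "scripts": {
    "test": "jest"
  },
  "devDependencies": {
    "jest": "^22.0.0",
    "babel-jest": "^22.0.0"
  },
  "babel": {
    "presets": ["env", "react"]
  }
}
```

With a configuration of this shape, running the test script lets Jest discover and run the test files with no further setup.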
If wanted, one can still configure Jest to use external libraries, although the advantages of this are limited, as Jest will still ship its own libraries, making the project heavier and possibly slower.

Features

Comparing the features of both these testing frameworks can be challenging, as they have completely different scopes, Jest being an all-in-one test framework and Mocha being a test runner. With a fully configured setup, Mocha and Jest can both achieve exactly the same features. Even though Jest includes everything so that tests can be run out of the box, external libraries can still be added on top of it.
Jest is not limited in any way to use only what it comes with. Mocha does the same, but actually requires the additional libraries. Many of the additional external libraries that can be run with Mocha can also be run with Jest. A possible way of having Mocha achieve the same base functionality as Jest is to include Chai (http://www.chaijs.com/) as an assertion library and Sinon.JS (http://sinonjs.org/) for mocking.
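To illustrate the two roles these libraries fill, here are hand-rolled, heavily simplified analogues of a Chai-style assertion and a Sinon-style spy. The real libraries offer far richer APIs; this is only a sketch of the concepts:

```javascript
// Chai-style assertion: throws with a readable message on mismatch.
function expect(actual) {
  return {
    toEqual(expected) {
      if (JSON.stringify(actual) !== JSON.stringify(expected)) {
        throw new Error('expected ' + JSON.stringify(expected) +
                        ', got ' + JSON.stringify(actual));
      }
    },
  };
}

// Sinon-style spy: wraps a function and records every call's arguments.
function spy(fn) {
  const wrapped = (...args) => {
    wrapped.calls.push(args);
    return fn ? fn(...args) : undefined;
  };
  wrapped.calls = [];
  return wrapped;
}

// Usage: pass a spy as a callback, then assert on how it was called.
const onSave = spy();
onSave('doc-1');
onSave('doc-2');
expect(onSave.calls.length).toEqual(2);
expect(onSave.calls[0]).toEqual(['doc-1']);
console.log('spy recorded ' + onSave.calls.length + ' calls');
```

This is the functionality a test runner like Mocha leaves to external libraries, and that Jest bundles in.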
When these two libraries are included with Mocha, the same base functionality can be achieved. After that, both frameworks require additional libraries in order to acquire more features. This makes choosing either Jest or Mocha based only on their features non-viable, as they can both offer the same features at almost the same cost.

Performance

Jest's and Mocha's performance is very similar when looking at small code bases, but as projects get larger and tests become more intricate, Jest starts pulling ahead of Mocha.
Unlike Mocha, Jest includes parallelization. This means that Jest makes the most of the available CPU power by assigning tests to multiple workers and making the workers run the tests in parallel. It does this using a round-robin approach (https://en.wikipedia.org/wiki/Round-robin_scheduling), which splits up tests by how fast they run, assigning the workers tests that ultimately will take a similar time to complete (https://facebook.github.…).
In order to figure out test run times, Jest initially estimates run time by looking at file size, and when tests are run, the run time is cached per file. Once the data is collected, Jest allocates tests fairly to workers, running slower tests first and faster tests later. This does, however, mean that the first time tests are run, the run can be drastically slower than the following runs, as the data has not been collected yet. According to Facebook's findings, this has improved run time by up to 20% (https://facebook.…).
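The scheduling idea described above can be sketched as a greedy, longest-first assignment: sort tests by their cached (or estimated) duration, then hand each one to the currently least-loaded worker. The test names and timings below are invented for illustration, and this is a conceptual sketch, not Jest's actual implementation:

```javascript
// Greedy longest-first scheduling: slow tests are placed first, each on
// the worker with the least accumulated work, so total loads even out.
function schedule(tests, workerCount) {
  const workers = Array.from({ length: workerCount }, () => ({ load: 0, tests: [] }));
  const sorted = [...tests].sort((a, b) => b.ms - a.ms); // slowest first
  for (const t of sorted) {
    const w = workers.reduce((min, cur) => (cur.load < min.load ? cur : min));
    w.tests.push(t.name);
    w.load += t.ms;
  }
  return workers;
}

const plan = schedule([
  { name: 'signing-flow', ms: 900 },
  { name: 'store-logic', ms: 400 },
  { name: 'login', ms: 300 },
  { name: 'header', ms: 100 },
], 2);
plan.forEach(w => console.log(w.load + 'ms: ' + w.tests.join(', ')));
```

With two workers, the 900ms suite ends up alone on one worker while the three smaller suites share the other, giving loads of 900ms and 800ms instead of a naive split.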
For the case of the project, there were no apparent advantages to creating an in-house test optimization. Also, no Penneo employee had any knowledge of how to achieve this, and the Penneo project code base is not large enough for this to be a primary concern.

External references

To help make a final decision as to which framework to use, reference was made to what other companies were doing, in particular Airbnb. Airbnb, being a pioneer in the development of ReactJs, was a perfect example to look to. They have recently switched all of their tests over from Mocha to Jest. Their primary reason for doing so was performance.
As their code base is so large and split across so many files, Mocha was not optimizing run time, making the whole development process much slower. After the switch to Jest, Airbnb has been able to run tests in a little more than ? of the time it took when running with Mocha (https://medium.com/airbnb-engineering/unlocking-test-performance-migrating-from-mocha-to-jest-2796c508ec50).

Setup
  Jest: + Easy, no configuration; + Quick ReactJs integration; – Not as modifiable
  Mocha: – Configuration required; – Pre-setup research required; + A lot of online guides
Features
  Jest: + A lot out of the box; + Can use external libraries; – Becomes heavier if external libraries are wanted
  Mocha: ~ Only a test runner; + A lot of external libraries
Performance
  Jest: + Optimized out of the box
  Mocha: – Requires optimization; ~ Performance shortage only noticeable on large-scale projects

+ = advantage, – = disadvantage, ~ = compromise

Figure ?: Unit test framework comparative chart

After having thoroughly analysed the two tools, Jest was concluded to best fit Penneo and the particular requirements of the thesis project.
Another reason for selecting Jest, not mentioned above, is that it is backed by Facebook, which has created a team focused solely on the improvement and optimization of Jest. They have just introduced a completely rewritten version of Jest and have future plans for improving Jest's already impressive performance. This makes us confident that Jest will easily build a large user base, with other big companies backing it, making the technology future proof and a great choice for Penneo.

4.1.1 ReactJs Component Traversing

Neither Mocha nor Jest offers a solid way of traversing through ReactJs components. This has led us to look for an external library that would help us navigate through ReactJs components. Three libraries are considered here: Enzyme, ReactJs TestUtils and JSDom.
The Enzyme syntax is very similar to jQuery's, making it very easy to adopt, as a vast majority of the web is accustomed to jQuery (https://w3techs.com/technologies/details/js-jquery/all/all). There are not many alternatives to Enzyme, as it achieves a rather unique and niche goal.
The main alternatives to Enzyme are React TestUtils (https://reactjs.org/docs/test-utils.html), a native ReactJs library, and JSDom (https://github.com/jsdom/jsdom). Considering these tools as alternatives to Enzyme could be a stretch, as each of them only achieves a fraction of what Enzyme is capable of.
Unlike TestUtils and JSDom, Enzyme also allows tests to analyse the state of a component throughout its lifecycle. It also allows tests to directly change the state or props of a component, making it very easy to test Pure Components (http://lucybain.com/blog/2018/react-js-pure-component/). TestUtils and JSDom offer only a fraction of Enzyme's functionality and mainly focus on traversing DOM elements. This led the final decision to lean heavily towards Enzyme, as ReactJs-specific tests could be written in a more generic, quick and reader-friendly manner.
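As a rough illustration of the wrapper API Enzyme exposes for state and props, the plain-JavaScript stand-in below mimics its shape so the sketch can run without React or Enzyme installed. The real calls are Enzyme's `shallow()`, `wrapper.state()`, `wrapper.setState()` and `wrapper.setProps()`; the Counter component here is hypothetical.

```javascript
// Plain-JavaScript stand-in for Enzyme's shallow-render wrapper, mimicking
// the state/props access described above. A real Enzyme test would do:
//   const wrapper = shallow(<Counter step={1} />);
function shallow(component) {
  let state = { ...component.initialState };
  let props = { ...component.props };
  return {
    state: (key) => (key === undefined ? state : state[key]),
    setState: (patch) => { state = { ...state, ...patch }; },
    props: () => props,
    setProps: (patch) => { props = { ...props, ...patch }; },
  };
}

// Hypothetical counter component, reduced to plain data for this sketch.
const Counter = { props: { step: 1 }, initialState: { count: 0 } };

const wrapper = shallow(Counter);
wrapper.setProps({ step: 5 });   // directly change props, as Enzyme allows
wrapper.setState({ count: wrapper.state('count') + wrapper.props().step });
console.log(wrapper.state('count')); // 5
```

Being able to poke at state and props like this at any point in the lifecycle is exactly what makes Pure Components straightforward to test.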
Enzyme would also help test ReactJs components more thoroughly, making seemingly random ReactJs errors a less common occurrence.

https://medium.com/airbnb-engineering/enzyme-javascript-testing-utilities-for-react-a417e5e5090f
https://javascriptplayground.com/introduction-to-react-tests-enzyme/
http://airbnb.io/projects/enzyme/
https://hackernoon.com/api-testing-with-jest-d1ab74005c0a
https://facebook.github.io/jest/
https://hackernoon.com/implementing-basic-component-tests-using-jest-and-enzyme-d1d8788d627a

4.2 Functional tests

4.2.1 Test framework

For creating and running functional tests, the three most popular test frameworks – Mocha, Jasmine and Cucumber – were considered. This section describes each of these frameworks briefly, compares them in terms of the following requirements, and elaborates on why choosing Cucumber was a fairly easy decision and what makes it a suitable test framework for functional tests of the application's frontend. There were two most desired features when looking for a test framework for automated UI functional tests:
– Support for Behaviour Driven Development (BDD).
– Ease of reading and understanding tests by non-programmers.
The reason for that is to initiate collaboration between business-facing people – the Penneo client support team – and developers. Support for browser automation utilities is also a requirement, as the testing framework is supposed to work alongside them. (Ref: 2.2 Requirement Specification)

Mocha

Starting with arguably the most popular framework, Mocha.
Mocha got attention when deciding which testing framework should be used for UI tests, mostly because it supports BDD and because of its popularity and huge developer community. Mocha has been used to test everything under the sun, including functional UI testing, so the decision was made to take a closer look at this framework from the perspective of functional testing. As the biggest testing framework, Mocha is compatible with most (if not all) browser automation tools, such as Selenium, Protractor, WebdriverIO, etc. By default Mocha provides a BDD-style interface, which specifies a way to structure the code into test suite and test case modules for execution and for the test report. Mocha's test suites are defined by the `describe` clause and test cases are denoted by the `it` clause. Both clauses accept callback functions and can nest inside each other, which means that one test suite can contain many inner test suites, each of which can hold either further test suites or the actual test cases.
The API provides developers with hooks such as `before()`, `after()`, `beforeEach()` and `afterEach()`, which are used for setup and teardown of test suites and test cases. Mocha also allows using any library for assertions, such as ChaiJS or Should.js, which are great BDD assertion libraries (https://mochajs.org/#assertions). This is very useful for unit tests or for functional tests that are only going to be seen by developers, but it may not be the preferred format for business-facing users. The test specification mixes with the actual implementation of the tests, which is not an especially friendly approach for non-programmers and could actually complicate communication between both sides.

Jasmine

From this point of view Jasmine (https://jasmine.github.io/) is very similar to Mocha: it is also perfect for unit testing and is compatible with many browser automation tools. Like Mocha, its tests also support the BDD style (https://jasmine.github.io/tutorials/your_first_suite). This is what both of these frameworks have in common.
The `describe` clause defines a test suite and the `it` clause implements the BDD method describing the behaviour that a function should exhibit. A test case denoted by the `it` clause is built in the same way as in Mocha: its first parameter is a description of the behaviour under test, written in plain English, and the second parameter is the implemented function, which calls assertions that either pass or fail depending on whether the described behaviour was confirmed. Both frameworks are fine choices if separating the textual specification – which everyone can read and write – from the technical implementation is not required.
Due to not meeting this requirement, Jasmine, like Mocha, was dismissed as inadequate.

Cucumber

Cucumber (https://cucumber.io/) is a more user-story-based testing framework which allows writing tests in the BDD style.
It targets higher levels of abstraction. Instead of writing tests purely in code, as is done in Mocha or Jasmine, Cucumber separates tests into human-readable user stories and the code that runs them. Cucumber uses the Gherkin language to write its tests. Gherkin is a Business Readable, Domain Specific Language (https://martinfowler.com/bliki/BusinessReadableDSL.html) that makes it possible to describe software's behaviour without going into the details of how that behaviour is implemented. Since the tests are written in plain language and anyone on the team can read them, they improve communication and collaboration across the whole team.
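A Gherkin scenario for a hypothetical Penneo-style login flow might look like the sketch below (the feature and step wording are invented for illustration). Each `Given`/`When`/`Then` line is later matched to a JavaScript step definition that drives the browser:

```gherkin
# login.feature – illustrative example, not an actual Penneo test
Feature: Logging in
  As a registered user
  I want to log in to the web application
  So that I can access my documents

  Scenario: Successful login with valid credentials
    Given I am on the login page
    When I enter a valid email and password
    And I click the "Log in" button
    Then I should see the dashboard
```

Because nothing in the feature file refers to selectors or code, a non-programmer can read, review or even draft it.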
Features are written in a natural language, so business people can read them easily, give developers early feedback or even write some themselves. Another great benefit of having functional tests made with Cucumber is that those tests provide living documentation, describing how the UI is supposed to work for each specific feature.

4.2.2 Browser automation

In order to implement functional tests, just as in the case of the unit tests, extensive research was done. Different browser automation options were looked into and compared to find the best solution for testing some of the crucial parts of the Penneo web application.
Installing and working with all of them on virtual machines would get annoying and inconvenient very quickly. Support for headless browsers like PhantomJS or Headless Chrome was also important, as executing tests in these browsers makes the development of tests much more convenient: the developer does not need to watch multiple browser windows reload and change. Fortunately, it turned out that all of the most popular browser testing frameworks already support these features. A key factor was also support from the developer community, to make sure that the chosen framework will stay popular for the next few years and there will be no need to change it. The support of a wide community also brings benefits in terms of new useful plugins and extensions, and makes it easier to find answers for inevitable problems and errors on websites like StackOverflow.
WebdriverIO

(https://www.w3.org/TR/webdriver/) In other words, WDIO is a testing utility that enables writing automated Selenium tests by providing a variety of tools for controlling a browser or a mobile application. WebdriverIO provides entirely different and smarter bindings for the WebDriver API compared to Selenium's, which is the most popular browser automation library on the market; its Node.js implementation is called WebDriverJS.
The naming around these technologies can get a little tricky and confusing, since they all use similar names.

Figure ?: WebdriverIO, Selenium server and Chrome test communication flow

WebdriverIO, WebDriverJS and Nightwatch communicate over a RESTful HTTP API with a WebDriver server (usually the Selenium server). For all of them, the RESTful API is defined by the W3C WebDriver API.
(source) Recently WebdriverIO had a major version change, with a rewrite of the whole code base into ES2016. As the project community grows dynamically, the project itself and its plugins are also being developed rapidly. One of the biggest changes was splitting the entire project into multiple packages, which has had a positive impact on the new plugins being developed to write better tests in a simpler way. The WebdriverIO framework has an integrated test runner, which is a great advantage that helps to create a reliable test suite that is easy to read and maintain.
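A minimal WebdriverIO test-runner configuration along these lines might look as follows. This is a sketch only: the file paths and capability options are assumptions, and the exact configuration keys have changed between WebdriverIO versions.

```javascript
// wdio.conf.js – illustrative sketch of a WebdriverIO runner configuration
// using Cucumber as the test framework and a headless Chrome browser.
const config = {
  // Where the runner looks for Gherkin feature files (assumed layout).
  specs: ['./test/features/**/*.feature'],

  // Browser(s) to drive via the WebDriver protocol.
  capabilities: [{
    browserName: 'chrome',
    'goog:chromeOptions': { args: ['--headless', '--disable-gpu'] },
  }],

  // Use Cucumber instead of Mocha or Jasmine.
  framework: 'cucumber',
  cucumberOpts: {
    require: ['./test/steps/**/*.js'], // step definitions (assumed path)
    timeout: 60000,                    // per-step timeout in ms
  },

  // Human-readable console output.
  reporters: ['spec'],
};

// In a real wdio.conf.js this object would be exported:
//   exports.config = config;
```

The `framework` key is what wires the runner to Cucumber; swapping it for 'mocha' or 'jasmine' is the extent of the change needed to switch test frameworks.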
It supports three main test frameworks: Mocha, Cucumber and Jasmine. Since the newest major version release, the WDIO test runner handles async code (https://developer.mozilla.

Protractor

Protractor (https://www.protractortest.org/#/api) was the first to be dismissed as inadequate because, even though it supports the same test runners as WebdriverIO and has a pretty good reporting mechanism, its main focus is testing Angular applications, as it has built-in support for AngularJS element identification strategies (https://www.protractortest.org/#/api?view=ElementFinder). Also, unlike all of the other compared frameworks, it is only implemented as a wrapper for WebDriverJS, which creates one more layer between Selenium and the Protractor system tests (https://www.protractortest.org/#/). This means it is highly dependent on the WebDriverJS project, so whenever there is an issue with WebDriverJS, Protractor needs to wait for the WebDriverJS team to fix it.
NightWatch

NightWatch (http://nightwatchjs.org/) is very similar to WebdriverIO, since it is also a custom implementation of the W3C WebDriver API and works with the Selenium standalone server. An interesting feature of this tool is that it has its own testing framework; the only external testing framework supported is Mocha. One may see this as an advantage, as it removes the headache of choosing a testing framework, especially since it also implements its own assertion mechanism. However, in our particular case the lack of support for the Cucumber framework was a major disadvantage and the main reason for not choosing NightWatch; the reasoning for this is explained further in this paper. In comparison with WebdriverIO, Protractor and WebDriverJS it has slightly less support, which was also an important factor when comparing these technologies. An important thing to point out is that all of the mentioned browser automation tools support cloud browser testing providers such as SauceLabs and BrowserStack, which is a very desired feature. These services allow developers to run their automated tests in cloud-based browsers to ensure websites run flawlessly on every browser, operating system and device.