Certifications vs Master's

Hi reader!

Today I want to talk about a topic that may interest you, something that comes to mind whenever you try to move up in your professional career: is it better to invest in certifications on different technologies, or should you go for a master's degree?


There are some aspects we should consider in order to choose between them, and for me they are: Investment, Market and Yourself.

Investment

On this first point, you should ask whether the investment in whichever choice you make will be returned, or will give you back more than you put in. Simple.

If getting a certification will give you a raise or a better position at work, then go ahead. If achieving a master's will let you travel around the world speaking at conferences, then go ahead.

This depends on the particular needs of each one of us, but one thing in common is that you should evaluate the return on investment you will get at the end of the day, whether it's money, a new job or position, better quality of life, opening a business, etc.

Market

Another aspect that might give you the answer, and which is related to the previous one, is to look at the market trends for both options in the geographical zone where you are willing to work.

Who knows? Maybe Mexico has a lot of companies willing to pay more on job openings for certified people only, or maybe in Australia there is a trend favoring people who hold a master's at research or science related companies.

This can also be more specific, at a company level; Microsoft or Oracle, for example, may prefer people certified on their own technologies.

This depends heavily on the company you want to work for, or the kind of opportunity you want to take, for example traveling around the globe creating your own products or spreading your knowledge.

Yourself

I think that when you achieve something, a new challenge is waiting for you, and the intention of completing a goal that you trust will make you stronger, that feeling beats everything.

Setting aside the economic aspect, and in some instances the professional one, in the end what counts is how you feel, and the enthusiasm you get when you imagine yourself crossing the finish line.

I have gone both ways, and to tell you the truth, some of the skills I learned along the way I have not used in my daily work, or maybe only unconsciously, but I don't regret investing in them, as I'm driven by the feeling of completion and by getting better and better at my craft and my career.

I hope you find these tips useful, and stay tuned for more content.

Thanks!


Flux vs Redux: Creating the Warehouse.

In the last post, we learned how to create reducers and the rules that must be followed in order for the container to work correctly. Now it's time to see how we set up the container that the components will request data from.

First we need to configure our store, which is the container where all the state of the application will live.

import { applyMiddleware, createStore } from 'redux';
import reduxImmutableStateInvariant from 'redux-immutable-state-invariant';
import reducers from '../reducers';
import thunk from 'redux-thunk';

export default function(initialState) {
  // Build the single store from the combined reducers, an optional
  // initial state and the middleware pipeline.
  const store = createStore(
    reducers,
    initialState,
    applyMiddleware(thunk, reduxImmutableStateInvariant())
  );

  return store;
}

Here I'm importing some required libraries in order to build the store.

First, from the redux library itself, I'm using the applyMiddleware and createStore modules.

The applyMiddleware module allows you to execute any kind of functionality whenever the Redux pipeline is triggered. It works pretty much like the middleware pattern that Express.js follows when you develop HTTP endpoints.

Express Middleware from the Express.js web site.
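To illustrate that pipeline shape, here is a minimal sketch of a custom middleware (a hypothetical logger, not part of this app):

const logger = store => next => action => {
  console.log('dispatching', action.type);
  const result = next(action); // hand the action to the next middleware
  console.log('next state', store.getState());
  return result;
};

// It could then be plugged in next to the others:
// applyMiddleware(thunk, logger, reduxImmutableStateInvariant())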

The createStore module is the one responsible for creating our container. It receives the list of reducers of the whole application, the optional initial state, and any middleware you need executed, passed as arguments to the applyMiddleware function.

The middleware used here comes as npm packages, so you can find the ones you need and just plug them into your app. In my case, redux-immutable-state-invariant checks that the state is not being mutated, and redux-thunk is required because our API access uses the thunk pattern, so Redux needs to know about it.
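For context, a thunk is just an action creator that returns a function receiving dispatch, so async work can happen before the action fires. A minimal sketch (the feedbackApi module is hypothetical):

export function loadFeedbacks() {
  return function(dispatch) {
    // feedbackApi stands in for whatever module wraps the HTTP calls.
    return feedbackApi.getAll()
      .then(feedbacks => dispatch({ type: LOAD_FEEDBACKS_SUCCESS, payload: feedbacks }))
      .catch(error => { throw error; });
  };
}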

The list of reducers looks like this:

"use strict";
import { combineReducers } from 'redux';
import { routerReducer } from 'react-router-redux'

import booksReducer from '../reducers/books/booksReducer.js';
import selectedBook from '../reducers/books/selectedBookReducer.js';
import feedbacks from '../reducers/feedbacks/feedbacksReducer.js';
import feedbackDetails from '../reducers/feedbacks/feedbackDetailsReducer.js';
import positionReducer from '../reducers/position/positionReducer.js';
import positionDetails from '../reducers/position/positionDetailsReducer';
import ajaxReducer from '../reducers/common/ajaxReducer.js';
import candidatesReducer,* as fromCandidates from '../reducers/candidates/candidatesReducer';


const reducers = { 
                   ajaxCallsInProgress:ajaxReducer, 
                   books: booksReducer,
                   candidates:candidatesReducer,
                   selectedBook: selectedBook,
                   feedbacks: feedbacks,
                   positions: positionReducer,
                   positionDetails: positionDetails,
                   feedbackDetails: feedbackDetails,
                   routing: routerReducer
};
export default combineReducers(reducers);

Here I'm importing all of my reducers and merging them with the combineReducers module provided by Redux, so every action can be passed to each of them and the proper state changes get executed.
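Putting both files together, wiring up the store could look like this (the import path is an assumption):

import configureStore from './store/configureStore'; // hypothetical path to the factory above

const store = configureStore(); // initialState is optional

// combineReducers makes the state an object whose keys mirror the map above:
// { ajaxCallsInProgress, books, candidates, selectedBook, feedbacks, ... }
console.log(store.getState().books);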

Now that we have our store, reducers and actions created, we need a way for them to be accessed inside our components' HTML. Stay tuned and we will find out in the next chapter.

Flux vs Redux: Now It's Redux's Turn.

We are back! And in this new episode we are going to cover what Redux is and what it brings to the table in frontend development nowadays.

Redux Logo

If you read the previous episode in this series, we went into the nuts and bolts of Flux, a pattern for developing React applications using a unidirectional data flow approach.

The pattern actually looks quite nice, but one of its disadvantages is that in a large scale application, or one with a lot of components involved, there is a lot of boilerplate code: raising actions, stores, and event registration between the components' DOM events.

Redux is practically an abstraction over much of the complexity that Flux has, by providing a single store, actions, and framework-handled events through a concept called the "reducer".

A reducer is a function that receives the current state of the app and the triggered action, and returns the next state:

export default function(state = initialState.feedbacks, action) {

  switch(action.type) {

    case CREATE_FEEDBACK: {
      // Return a new array with the created feedback appended.
      return [...state, action.payload];
    }

    case EDIT_FEEDBACK: {
      // Rebuild the array, swapping the edited element in place.
      let indexOfEditElement = _.findIndex(state, (item) => item.id === action.payload.id);
      return [...state.slice(0, indexOfEditElement),
              action.payload,
              ...state.slice(indexOfEditElement + 1)];
    }

    case LOAD_FEEDBACKS_SUCCESS:
      return action.payload;

    default: {
      return state;
    }
  }
}

There is a single piece of state that holds all of your application's data across components; you receive it here along with the action that was triggered, and much like with Flux stores, you have to check which type of action was used in order to modify the state.

Up to this point things are quite similar, but the beauty of Redux is actually in the way things get returned from the reducer function.

It turns out that the reducer must be what is called a "pure" function. That means a function which doesn't change the state or the structure of the arguments that are sent to it; instead, it returns a new one.
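A quick hypothetical illustration of the difference:

// Impure: mutates the array it was given.
function addItemImpure(state, item) {
  state.push(item);
  return state;
}

// Pure: leaves the input untouched and returns a new array.
function addItemPure(state, item) {
  return [...state, item];
}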

That's the beauty of Redux: since you must return a new state on every action call, the framework doesn't need to check which property was changed, added or deleted. Instead you bring a whole new state including the changes made by the action. That principle is also known as immutability.

As you can see in the code, I return a new array including the payload coming from the raised action. An action is simply a JavaScript object that requires a "type" property; the rest is up to you. You can send whatever you want, and in this case I just wrapped everything in a payload property.
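To make that concrete, here is a minimal sketch of such an action and a matching action creator (the payload shape is an assumption):

// The raised action itself:
// { type: CREATE_FEEDBACK, payload: { id: 42, comment: 'Great book!' } }

// And an action creator that builds it:
export function createFeedback(feedback) {
  return { type: CREATE_FEEDBACK, payload: feedback };
}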


In the next chapter we will see how to raise actions and hook them up to reducers inside our React components.

Adventures in React: Flux vs Redux

After developing simple applications with React, I started to think about what I would need in order to develop an application with this technology in a more real-world scenario.

Then the research started, and as I expected, a lot of options were available. The most famous was an architectural pattern called Flux, which was pretty good indeed; the general idea and the 'marketing' against MVC had some interesting points of view from the perspective of the folks at Facebook.

Flux Logo by Facebook

I started to play with it, and I liked the general workflow of development with Flux.

You first create actions, which you raise through a JavaScript object called the dispatcher. The dispatcher is responsible for passing the action and its data to all of the stores associated with it, which are like controllers that each react to specific actions.

An action is practically a JavaScript object with a property called actionType, which is passed to all the stores so each can decide whether it processes the action that was raised:

createCourse: function(course) {
    var newCourse = CourseApi.saveCourse(course);

    //Hey dispatcher, go tell all the stores that a course was just created.
    Dispatcher.dispatch({
      actionType: ActionTypes.CREATE_COURSE,
      course: newCourse
    });
  },

As you can see, it's a simple function that I'm able to call inside a React component and bind to some HTML event.

Then we have our stores, which are the pieces that take care of data retrieval and manipulation:

Dispatcher.register(function(action) {
  switch(action.actionType) {
    case ActionTypes.INITIALIZE:
      _courses = action.initialData.courses;
      CourseStore.emitChange();
      break;
    case ActionTypes.CREATE_COURSE:
      _courses.push(action.course);
      CourseStore.emitChange();
      break;
    case ActionTypes.UPDATE_COURSE:
      var existingCourse = _.find(_courses, {id: action.course.id});
      var existingCourseIndex = _.indexOf(_courses, existingCourse);
      _courses.splice(existingCourseIndex, 1, action.course);
      CourseStore.emitChange();
      break;
    case ActionTypes.DELETE_COURSE:
      _.remove(_courses, function(course) {
        return action.id === course.id;
      });
      CourseStore.emitChange();
      break;
    default:
      // no op
  }
});
Here I handle four different types of logic depending on the action being received. Notice the emitChange function, which comes from Node's EventEmitter and is declared before the registration process:

"use strict";

var Dispatcher = require('../dispatcher/appDispatcher');
var ActionTypes = require('../constants/actionTypes');
var EventEmitter = require('events').EventEmitter;
var assign = require('object-assign');
var _ = require('lodash');
var CHANGE_EVENT = 'change';

var _courses = [];

var CourseStore = assign({}, EventEmitter.prototype, {
  addChangeListener: function(callback) {
    this.on(CHANGE_EVENT, callback);
  },

  removeChangeListener: function(callback) {
    this.removeListener(CHANGE_EVENT, callback);
  },

  emitChange: function() {
    this.emit(CHANGE_EVENT);
  },

  getAllCourses: function() {
    return _courses;
  },

  getCourseById: function(id) {
    return _.find(_courses, {id: id});
  }
});

Great! Now in our component we simply raise the action, and we are practically done with the Flux process:

saveCourse: function (event) {
   event.preventDefault();

   if (!this.courseFormIsValid()) {
     return;
   }

   if (this.state.course.id) {
     CourseActions.updateCourse(this.state.course);
   } else {
     CourseActions.createCourse(this.state.course);
   }
 },
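One piece worth sketching to close the loop is the component listening to the store; a minimal hypothetical version (component and child names assumed) would be:

var CoursePage = React.createClass({
  getInitialState: function() {
    return { courses: CourseStore.getAllCourses() };
  },

  componentWillMount: function() {
    // Re-render whenever the store emits a change.
    CourseStore.addChangeListener(this._onChange);
  },

  componentWillUnmount: function() {
    CourseStore.removeChangeListener(this._onChange);
  },

  _onChange: function() {
    this.setState({ courses: CourseStore.getAllCourses() });
  },

  render: function() {
    return <CourseList courses={this.state.courses} />;
  }
});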

And that's all! In the next post we will get our hands dirty with Redux, so stay tuned!

Adventures in React: Azure

Hi readers! Today I'm going to blog about deployment. Yes, deployment, and on one of the biggest cloud providers available around, which is Microsoft Azure.

Microsoft Azure provides a bunch of features, but I only required something very basic, because the application I was about to deploy was a game developed with React in which all the logic lives on the client side, with no server interaction at all. So all I needed was to upload a simple static page.

So I went to Azure and checked the different deployment options it provides, and found one that could really help me: deploying via GitHub.

I followed the wizard, pushed some buttons and voila! Nothing was displayed on my page.

After some research, I found that on every deployment, a tool called Kudu runs some processes.

Kudu Logo from https://github.com/projectkudu/kudu

Then I found that Kudu runs a script that depends on the type of technology of your application, and if there is a package.json it will assume that your application is Node.js, so a predefined script will run in order to make it work with IIS.

This was bad for me, as I actually needed Node only to download the required libraries from npm and nothing more, yet for some reason Kudu looked for something inside my application in order to start a Node server.

After a lot of trials and research, it turned out that you're able to create your own Kudu script and upload it along with your application, and Azure will run it instead of the provided one.

So I started to analyze this file, and by reading the comments in it, I found the script runs the following steps:

1.-Select Node Version:

IF DEFINED KUDU_SELECT_NODE_VERSION_CMD (
  :: The following are done only on Windows Azure Websites environment
  IF EXIST "%DEPLOYMENT_TEMP%\__nodeVersion.tmp" (
  echo First
    SET /p NODE_EXE=<"%DEPLOYMENT_TEMP%\__nodeVersion.tmp"
    IF !ERRORLEVEL! NEQ 0 goto error
  )
  
  IF EXIST "%DEPLOYMENT_TEMP%\__npmVersion.tmp" (
  echo Second
    SET /p NPM_JS_PATH=<"%DEPLOYMENT_TEMP%\__npmVersion.tmp"
    IF !ERRORLEVEL! NEQ 0 goto error
  )

  IF NOT DEFINED NODE_EXE (
  echo Third
    SET NODE_EXE=node
  )

  SET NPM_CMD="!NODE_EXE!" "!NPM_JS_PATH!"
) ELSE (
  SET NPM_CMD=npm
  SET NODE_EXE=node
)

Here, Azure retrieves the most recent version of Node and adds it to the respective folders in order to run commands.

2.-Install NPM packages:

echo :: 1. Install npm packages
echo %DEPLOYMENT_SOURCE%\package.json
IF EXIST "%DEPLOYMENT_SOURCE%\package.json" (
  pushd "%DEPLOYMENT_SOURCE%"
  call :ExecuteCmd !NPM_CMD! install
  IF !ERRORLEVEL! NEQ 0 goto error
  IF EXIST "%DEPLOYMENT_SOURCE%\webpack.config.js" (
    call :ExecuteCmd !NPM_CMD! run dist
    IF !ERRORLEVEL! NEQ 0 goto error
  )
  popd
)

In this step you can call Node commands, so at this point all I do is install the required npm packages and then generate the dist folder by running the custom "dist" command I created inside package.json.
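For reference, that "dist" entry in package.json could look something like this (the exact webpack flags are an assumption about my setup):

"scripts": {
  "dist": "webpack --config webpack.config.js -p"
}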

3.-Kudu Sync

echo :: 3 KuduSync
IF /I "%IN_PLACE_DEPLOYMENT%" NEQ "1" (
  call :ExecuteCmd "%KUDU_SYNC_CMD%" -v 50 -f "%DEPLOYMENT_SOURCE%\dist" -t "%DEPLOYMENT_TARGET%" -n "%NEXT_MANIFEST_PATH%" -p "%PREVIOUS_MANIFEST_PATH%" -i ".git;.hg;.deployment;deploy.cmd"
  IF !ERRORLEVEL! NEQ 0 goto error
)

At this point the KuduSync command is executed, which practically just copies the content generated in the dist folder to DEPLOYMENT_TARGET, a directory already defined by Azure inside the script.

After all of this, my application was happily deployed and it runs perfectly on Azure. For some reason the script customization was not very intuitive to find, but in the end it worked. Thanks, and I hope you find this helpful.

Adventures in React: More Webpack

Continuing with this whole trip around the world of React, we left off analyzing the configuration that the react-webpack Yeoman generator provides for developing apps.

I started to see that the default settings the generator provided weren't clear at all, so I began to build my own personal webpack configuration (not sure if that was the real reason, or just my wish to learn webpack that badly).

I started with an empty webpack config file and began typing the big JavaScript object it requires. As I typed, I saw from the Yeoman template that the configurations for prod and dev were different; for example, the hot replacement feature was not included in prod, and there were some extra webpack plugins registered for the prod application.

I decided that in the future I would separate them into a shared base plus per-environment files and merge them depending on the requested environment, as sketched below. I took a shot at dev first.
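The separation I had in mind would look roughly like this (a sketch assuming the webpack-merge package; the file names are hypothetical):

const merge = require('webpack-merge'); // assumed dependency
const baseConfig = require('./cfg/base');

// Pick the environment-specific overrides and merge them over the shared base.
module.exports = function(env) {
  const envConfig = require('./cfg/' + (env === 'prod' ? 'prod' : 'dev'));
  return merge(baseConfig, envConfig);
};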

First I started with my entries, where I noticed some weird URLs inside them:

entry:{
        bundle:['webpack-dev-server/client?http://127.0.0.1:8080',
            'webpack/hot/only-dev-server',
            './app.js']
    },

It turned out, after a lot of research and trials, that these URLs are required by webpack if you intend to run the dev server from your own code via Node.js instead of through console commands, running node server.js for example, where that file contains the following server initialization with the Node API provided by webpack:

new WebpackDevServer(webpack(config),{
        contentBase: 'public',
        historyApiFallback: true,
        hot: true,
        publicPath:'/public/assets/js/'
})
    .listen(8080, 'localhost', (err) => {
        if (err) {
            console.log(err);
        }
        console.log('Listening at localhost:8080');
        console.log('Opening your system browser...');
        open('http://localhost:8080/webpack-dev-server/');
    });

Here we have a WebpackDevServer object that initializes a Node server with the configuration provided in a config variable, which is the big object literal we are building, passed through the webpack function so it can be processed.

As a second argument, you must pass the settings that are particular to the server provided by webpack. I thought it would be simpler to include these settings in the config variable too; I tried, and it didn't work out for me. For some reason the second argument gets special treatment and must provide its own settings here.

After creation, we can call the listen method and the server, wrapped with webpack sugar, will start.

Then, back in our config file, apart from these URLs I included the starting point of the dependency tree.

After that comes the output section, where you place the destination of the final bundled file, along with this weird value called publicPath.

It confused me a lot, as it isn't obvious whether the HTML that consumes your bundled file should be placed in this path marked as public.

I then understood that this path simply tells webpack the URL from which the web application's files will be accessible, while physically they can live somewhere else; with the settings below, for example, the bundle is written to build/js/ but served from /public/assets/js/.

output:{
       path:path.resolve('build/js/'),
       publicPath:'/public/assets/js/',
       filename:'[name].js'
   },

Then we have our loaders, which are pretty self-explanatory, and our plugins section, where the HotModuleReplacementPlugin must be included for hot replacement features:

module: {
        loaders: [
            {
                test:/\.js$/,
                exclude:constants.ROOT_PATH + '/node_modules/',
                loader:'react-hot!babel-loader'
            },
            {
                test: /\.css$/,
                exclude: constants.ROOT_PATH + '/node_modules/',
                loader: 'style-loader!css-loader'
            },
            {
                test: /\.scss/,
                exclude: constants.ROOT_PATH + '/node_modules/',
                loader: 'style-loader!css-loader!sass-loader'
            }
        ]
    },
    plugins: [
        new webpack.HotModuleReplacementPlugin(),
        new webpack.NoErrorsPlugin()
    ]

And that's it for development: we have a nice configuration that covers the essential values needed for a decent webpack development setup.

Next episode we will tackle production, so stay tuned!

Thanks for reading.

Adventures in React. Polishing Webpack

After covering the basics of React and understanding its common pieces, I started to dig deeper into the configuration provided by the Yeoman template I downloaded, which was react-webpack.

As the first checkpoint, I looked at webpack.config.js and I was shocked that there weren't any configuration settings inside of it. What?

// Get available configurations
const configs = {
  base: require(path.join(__dirname, 'cfg/base')),
  dev: require(path.join(__dirname, 'cfg/dev')),
  dist: require(path.join(__dirname, 'cfg/dist')),
  test: require(path.join(__dirname, 'cfg/test'))
};

/**
 * Build the webpack configuration
 * @param  {String} wantedEnv The wanted environment
 * @return {Object} Webpack config
 */
function buildConfig(wantedEnv) {
  let isValid = wantedEnv && wantedEnv.length > 0 && allowedEnvs.indexOf(wantedEnv) !== -1;
  let validEnv = isValid ? wantedEnv : 'dev';
  return configs[validEnv];
}

module.exports = buildConfig(env);

It turns out this generator has different files to manage the most common environments of a front-end app. Depending on the environment variable sent through the command line, it pulls in the file with the respective settings.
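The excerpt doesn't show where env comes from; a plausible sketch of the top of the file (an assumption, the generator may parse it differently) would be:

// Derive the wanted environment from the command line, e.g. --env=dist
const minimist = require('minimist');
const args = minimist(process.argv.slice(2));
const allowedEnvs = ['dev', 'dist', 'test'];
const env = args.env || 'dev';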

Configuration settings for Webpack

It seems this configuration-per-environment issue is pretty well managed by this generator. Yay!

As I went deeper into the project's configuration settings, I started to spot weird things like:

devServer: {
   contentBase: './src/',
   historyApiFallback: true,
   hot: true,
   port: defaultSettings.port,
   publicPath: defaultSettings.publicPath,
   noInfo: false
 },

And also some weird entries and plugins in the configuration:

entry: [
   'webpack-dev-server/client?http://127.0.0.1:' + defaultSettings.port,
   'webpack/hot/only-dev-server',
   './src/index',

After some in-depth research: there is a feature of webpack-dev-server called Hot Module Replacement which, in short, pushes changes from your React components into the UI without a full reload.

Also, webpack-dev-server is a small Node web server customized by webpack to run your application code, so you can see what the application looks like with your assets bundled.

After a while I discovered that you can enable the HMR feature by running the command webpack-dev-server --hot, but for some reason it was not used in this template.

So, investigating how this project runs, I found a server.js file responsible for running the dev server without the webpack-dev-server command, which had this:

/*eslint no-console:0 */
'use strict';
require('core-js/fn/object/assign');
const webpack = require('webpack');
const WebpackDevServer = require('webpack-dev-server');
const config = require('./webpack.config');
const open = require('open');

new WebpackDevServer(webpack(config), config.devServer)
.listen(config.port, 'localhost', (err) => {
  if (err) {
    console.log(err);
  }
  console.log('Listening at localhost:' + config.port);
  console.log('Opening your system browser...');
  open('http://localhost:' + config.port + '/webpack-dev-server/');
});

I figured out that there is a Node API for webpack with which you can start the server from code, pass it some specific settings, and run a server pretty much like an Express/Node HTTP server.

If you choose this path, you must add to the entries the URLs listed above, since at this point the server doesn't have access to the HMR config setting, and you must also declare an extra plugin for it:

plugins: [
   new webpack.HotModuleReplacementPlugin(),

And also an extra loader for Babel:

loader: 'react-hot!babel-loader',

The template chose this path so you can programmatically send the environment-specific webpack settings to the server, in order to see how the bundling behaves in each case.

As I began to understand many things, I was having fun and getting amazed at how this template handled all this in a very elegant way.

Adventures in React. Entering the Webpack

In this series on digging into building applications with React, we left off using a scaffolded template provided by Yeoman, which gave us all the setup required to build a React application.

As with the other scaffolded templates, I reviewed what this template was doing for me behind the scenes and started to look at how this webpack thing worked.

Getting our hands dirty.

Webpack high level diagram

Webpack is a module bundler and dependency management system, with which you are able to require any kind of module (file) inside any other module of your application and bundle/transform all of these files with "loaders".

As I was analyzing documentation, books, videos and other resources in order to understand webpack, I started to freak out as concepts came to light: "transpilation", "loaders", "modules", "commonjs".

Then, after learning some of the theory and reviewing the webpack configuration in the downloaded template, I started to play around and identify some of the common pieces that are key to working with it.

First you need a file called webpack.config.js, which will contain all the configuration required by webpack to work on your application.

This is practically a simple JavaScript object, which requires the following members in order to work properly:

module.exports = {
  entry: {},
  output: {},
  module: {}
}

First we have the entry member, which is basically the initial file where your dependency tree starts.

This is important: I initially thought you had to list all your dependencies' file locations here, but it is actually just the main file where the requiring begins, from which webpack automatically walks the tree and retrieves everything for you. Sounds amazing, right?

If you have files that are mostly global libraries and don't have a dependency tree per se, like Bootstrap or jQuery, you can place them here as well under a different name, which will be the name of the extra bundle generated alongside the main one.

entry: {
    //Initial file where my app starts 
    app: path.join(constants.APP_PATH,'/app'),

   //All of my vendor files coming from package.json
    vendor: Object.keys(pkg.dependencies)
},

We defined our entry; now we need to define our output, which receives the folder where you want webpack to place your transformed and bundled assets.

You can also name the output file, which can be a fixed name, or you can use square brackets to tell webpack to generate one bundle per name defined in the entry section, optionally with a hash as well.

output: {
    path: constants.BUILD_PATH,
    filename: '[name].[chunkhash].js'
},

And finally the module section, which holds the set of loaders responsible for transforming/processing files according to matches.

A loader is simply a piece of functionality that is applied to a given file to transform it. With loaders we can transform SCSS to CSS, ES2015 code to ES5, or JSX to ES5.

The common structure of the module goes like this:

module: {
    loaders: [
        {
            test:/\.jsx$/,
            exclude:constants.ROOT_PATH + '/node_modules/',
            loader:'babel-loader'
        },
        {
            test: /\.css$/,
            exclude: constants.ROOT_PATH + '/node_modules/',
            loader: 'style-loader!css-loader'
        },
        {
            test: /\.scss/,
            exclude: constants.ROOT_PATH + '/node_modules/',
            loader: 'style-loader!css-loader!sass-loader'
        }
    ]
}

Practically, you just declare a loaders array, which is the set of loaders you will use to process files.

Each member of the array is an object with two must-have keys: test and loader.

test is a regular expression used by webpack to route each matching file to the respective loader, which can actually be downloaded via npm.

For example, in the first loader I'm telling webpack that all files ending with the jsx extension should pass through the Babel loader, so they get transpiled and converted to ES5.
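Each loader is just an npm package; the Babel loader used above, for instance, would be installed with something like:

npm install --save-dev babel-loader babel-core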

That's it for this episode. I will be posting more content on this tool, as I have been discovering some interesting stuff to apply to my modern web applications.

Adventures in React

Hi everyone!

Long time no see! In all this time a lot has happened in my life and I was a little disconnected from this blog. But now I'm back, with the intention of helping you choose the appropriate technologies to work with in the beautiful world of choices we now have for creating things in the IT industry.

Today I'm going to talk about React, since by the forces of destiny I needed to get prepared in this technology for a potential client, and also because it is the new trendy stuff around.

React Logo by Facebook

…So what is React by the way?

React is a JavaScript library created by Facebook, which provides a rich API for giving structure to the content of the view layer of any web application through componentization.

So then… what is componentization? The main idea of this concept is that you build your whole application by sectioning it into small pieces of functionality called components, where each one has its own behavior and communicates with other components in favor of reusability and organization.
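As a hypothetical example of such a small piece, a component can be as tiny as this:

import React from 'react';

// A self-contained piece of UI: it owns its markup and receives
// its data (the name) from whichever component composes it.
class Greeting extends React.Component {
  render() {
    return <h1>Hello, {this.props.name}!</h1>;
  }
}

export default Greeting;

// Used elsewhere as: <Greeting name="reader" />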

First Challenge. Getting Started

The first issue I ran into was that there were a lot of ways to start a React project, and it took me some time to decide which one to choose.

Concepts then started to pop up across the blogs and resources I was reading: JSX, gulp, Babel and ES6 topics, webpack, hot-dev servers, etc.

I was kind of paralyzed by the whole set of technologies around working with React, so I went to my friendly scaffolder, Yeoman.

Yeoman picture by yeoman.io

I searched Yeoman for a template that could help me start up quickly, and I found two interesting ones: React-fullstack and React-Webpack.

Choosing the best option

At first I went with React-fullstack, which looked very promising, as it provided a good separation of components, injection of SASS styles, JavaScript classes and imports via Babel, and a set of already preconfigured webpack settings, which I needed to understand at least in part.

After digging through the whole structure of this template and making changes to it, I saw there was functionality plugged in for me behind the scenes that was not totally under my control and that I didn't understand very well (or maybe I did, but I didn't want to spend much time on it).

React Starter Kit Landing Page

Then I looked at React-Webpack, which seemed to be a more basic template. I was not entirely confident about it, as I thought some pieces I wanted from the first template wouldn't be present in this one.

But after reviewing all the parts composing the template, I started to prefer this one, as I had more control over it thanks to its simplicity.

I then started to integrate some bits from React-fullstack, like having styles in each component, plus some other webpack and Babel configuration for extra features.

Component Organization in my project.

 

Babel configuration. You must download the stage-1 preset to get experimental features like class property initializers (including fat-arrow methods).

{
  "presets": [
    "es2015",
    "react",
    "stage-1"
  ]
}
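With stage-1 enabled, a component can use those features like this (a hypothetical example):

import React from 'react';

class Counter extends React.Component {
  // Property initializer (a stage-1 feature).
  state = { count: 0 };

  // Fat-arrow class property, so `this` stays bound without manual binding.
  increment = () => {
    this.setState({ count: this.state.count + 1 });
  };

  render() {
    return <button onClick={this.increment}>{this.state.count}</button>;
  }
}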

I will post more content along my way of learning React, so stay tuned.

Which automation testing tools are better?

Oooh, automation testing. A field that has become popular now that some of the most recent frameworks provide more complete APIs for testing their core components, and some software products have come to market trying to provide the easiest way to handle this high-value activity.

Both kinds of tools have pros and cons (as usual), and now it's time to put them in the ring so you can compare and see which one better suits your needs.

For this match I'm going to compare Selenium and TestComplete (which I think are the most popular) on different aspects to consider when choosing an automation solution for an application.

Ease of use


Selenium, as well as TestComplete, provides a feature called recording, which when activated starts to track all the actions the user performs in the browser.

This feature is good for people who don't have a lot of technical skills for test automation, because they can record simple tests and add them to the test stack.

Selenium provides a recorder called Selenium IDE, which is a Firefox-only plugin and is also not very powerful, as it generates a mess of output for complex testing scenarios, leaving coding those tests with the WebDriver API as your only choice.

TestComplete offers more advanced recording capabilities and tooling to manipulate the captured actions.

With its "keyword testing" approach, it maps components to aliases through a name mapping file, which can be updated as needed if the UI changes some time in the future. Also, while recording, you have the chance to check for specific data in the page through checkpoints, whose element-picking experience is similar to inspecting the DOM with the F12 developer tools.

For me, TestComplete wins this round, as a non-technical user can build a good set of simple tests without any programming skills at all.

Programming capabilities


Selenium provides its WebDriver API for coding tests in a bunch of different languages, including C#, Java and Ruby. So by itself it provides a lot of customization, plus the power these programming languages bring for creating solid automation frameworks.
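Just to give a flavor of the API, here is a minimal sketch using the JavaScript bindings via the selenium-webdriver npm package (the site and selector are hypothetical; my own work was in C#):

const webdriver = require('selenium-webdriver');
const By = webdriver.By;

// Open a browser, read a heading, then close the session.
const driver = new webdriver.Builder().forBrowser('firefox').build();
driver.get('http://example.com');
driver.findElement(By.css('h1')).getText()
  .then(text => console.log('Heading:', text))
  .then(() => driver.quit());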

On the other hand, TestComplete only provides the ability to create scripts within its own product, with JScript or VBScript.

Its IDE capabilities for scripting are not as good as what you find in Visual Studio or IntelliJ, though you can access the whole set of TestComplete libraries from these languages. The thing is that the languages are quite old, and it's difficult to create a coded framework within the tool, as their capabilities are not as powerful as the most modern ones.

So, if it isn't more than clear already: Selenium wins here.

Reporting Results


Every stakeholder on your team wants to see the results of your automation efforts, so you gain visibility and your work is considered highly valuable, and what better way than showing them a report of your test executions.

Selenium doesn't have the ability to generate test reports by itself, but there are third-party tools that, with some hooks in your code, can produce a very decent report.

I ended up generating my own reporter by writing an HTML document from C# with its I/O API. So it depends how complex you want to get.

With TestComplete you get more than good reporting capabilities. It logs every action and call stack it can capture. It also takes screenshots and attaches them to the report, which in Selenium you would have to build from scratch.

For me, TestComplete wins this reporting round, as it's a plug-and-play feature and you don't have to worry about investing time and money to build these capabilities for your automation scripts.

Continuous Integration


Every automation engineer wants to reach this step: integrating their automation efforts as part of a build process provided by CI products out there like TFS, Jenkins and TeamCity.

Since Selenium tests are written against a test framework, which is what the CI server uses to execute them, Selenium can be integrated as part of a build process, or even in the cloud with Sauce Labs.

On the other hand, TestComplete also offers support for integrating your tests into these CI systems, and in the cloud with its TestExecute tool.

So for this round, as both offer the same capabilities, I declare a tie between the two.

Pricing


This round is one of the most difficult ones, as pricing is an important aspect everyone considers when picking one technology over another.

Selenium, as you may know, is totally free. You are able to use the technology to create tests on your own without spending a cent, which is totally great. The problem comes when building the automation scripts for your application.

Building an automation framework is no easy task, and it may require you to invest almost the same amount of money in development time as the purchase of a TestComplete license.

TestComplete's price is high. Among all the features it provides, it can probably save you a lot of development time, as some of them are built in, but there is nothing there that a good Selenium framework cannot have.

So for me, this round goes to Selenium.

IMHO


The automation field is an important part of the development lifecycle, and the creation of an automation framework should be considered as important as any other kind of application.

These two powerful tools provide a more than acceptable testing environment to bring more quality to the application the whole team is working on. The choice, as always, depends on your team's capabilities, from technical knowledge to budget.

But the most important thing here is to bring test automation to your current project, so you can gain time on important issues like regression and cross-browser testing, because a lot of rework comes from tracking these aspects by hand, and of course, no one likes reworking something, right?

Thanks for reading!