
Why var_dump is not always the best solution – debugging with Xdebug

Every software developer needs to check what's under the hood from time to time. For beginners, the first choice will naturally be print_r paired with <pre>, or the "more advanced" var_dump. Even though there is nothing really wrong with this approach, it can be done better.

Of course, such an approach may require some additional effort to set up, but in most cases it's worth it to debug like a pro.

In this article we recommend Xdebug. This PHP extension lets you see what's happening inside your app, check global arrays like $_POST or $_GET, look inside objects and watch which variables are currently set. Moreover, Xdebug allows you to execute your app line by line and see what happens during a request.

As a bonus, Xdebug is great for testing AJAX requests. Normally your endpoint returns JSON or XML. If you use var_dump inside such output, you will ruin the document formatting and get an error in the JavaScript that is supposed to receive and parse the response. With a debugger you just delay execution, and after the debug session the output stays untouched.
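To illustrate, here is a minimal, hypothetical endpoint (the data and file are made up for this example). Uncommenting the var_dump line would prepend its output to the JSON body and break parsing on the client, while an Xdebug breakpoint on the same line leaves the response intact:

<?php
// hypothetical AJAX endpoint returning JSON
header('Content-Type: application/json');
$data = array('status' => 'ok', 'items' => array(1, 2, 3));

// var_dump($data); // would corrupt the JSON document echoed below
echo json_encode($data);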

To start with Xdebug, you first need to install and configure the PHP extension.

On Linux (Debian-based) systems you can install it with:

sudo apt-get install -y php5-xdebug

Then you need to locate the configuration file, which can be found at:
– apache: "/etc/php5/apache2/conf.d/20-xdebug.ini"
– php-fpm: "/etc/php5/fpm/conf.d/20-xdebug.ini"

A base configuration of Xdebug which should work in any environment is:

xdebug.remote_enable=1
xdebug.remote_autostart=1
xdebug.remote_connect_back=1
xdebug.remote_port=9000
xdebug.idekey = "idesecretkey"
xdebug.remote_handler=dbgp
xdebug.remote_mode=req
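After editing the ini file, restart the web server or PHP-FPM so the new settings are picked up, and verify that the extension is loaded. On a Debian-based setup (service names may differ on yours) that could look like:

sudo service apache2 restart    # or: sudo service php5-fpm restart
php -m | grep -i xdebug         # prints "xdebug" if the extension is loaded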

Depending on the IDE you are using, you will also need to configure it.

If you see "Fatal error: Maximum function nesting level of '100' reached", you should set xdebug.max_nesting_level = 200 (or more) in your configuration.

Configuring PhpStorm to use Xdebug on dev environment

PhpStorm claims "zero configuration" debugging, so it's as easy as setting the IDE key and port (if you stick with the key "PHPSTORM" and port 9000, it's already done for you) and starting to listen for incoming connections.

The procedure is well described in the PhpStorm zero configuration Xdebug manual, so there is no need to copy it here.

Additionally, if you want to provide your own IDE key or change the listening port, just set them as described here. There is nothing more to do: if the Xdebug extension is configured properly, the IDE key and port match those in the Xdebug config, and (when debugging an external host) the path mappings are correct, everything should just work.

Configuring Netbeans to use Xdebug on dev environment

1. Set up general NetBeans properties in Tools → Options → PHP → Debugging

2. Set up project properties in File → Project Properties:
– select "Run Configuration" in the left menu and check or specify your project URL,

– click the "Advanced" button under the "Arguments" field and select "Do not open browser". This way NetBeans will not open a new window after the debug session starts.

3. In the project configuration, select the "Sources" item and specify the small but very important "Web Root" option. Click the "Browse" button and select the folder where your init PHP script is placed.

Start an Xdebug session inside your web browser

Install an Xdebug plug-in for your dev browser and configure the idekey for remote debugging.

After this, enable the debug plug-in in the browser, set a breakpoint, start a debugging session in the IDE and refresh the browser page. The IDE (if it is listening for incoming debug connections) should pick up the debug session and place the cursor on the first breakpoint.

Mozilla: addons.mozilla.org/EN-US/firefox/addon/the-easiest-xdebug/

Chrome: chrome.google.com/webstore/detail/xdebug-helper/eadndfjplgieldjbigjakmdgkmoaaaoc
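If you prefer not to use a browser plug-in, or you want to debug a plain HTTP or AJAX call, you can also start a session by hand by passing the idekey from the configuration above (the host and paths below are just placeholders):

# start a debug session for a single request
curl "http://your-dev-host/index.php?XDEBUG_SESSION_START=idesecretkey"

# or send the session cookie directly, e.g. for an AJAX endpoint
curl --cookie "XDEBUG_SESSION=idesecretkey" "http://your-dev-host/api/endpoint"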

authors: Sergii Shostak; Bartek Telesiński


Have your organizational structure defined in Google Apps? Use it in your workflows!

Probably only a few people know about organizational structures in the Google Apps admin panel. Usually, organizational structures in Google Apps are used to customize service access or settings for different users. I bet that even fewer companies use them to reflect their actual organizational structure.

Organizational structures in Google Apps

Although the functionality is not that popular (for more information please see https://support.google.com/a/answer/4390551?hl=en), it may be beneficial in the context of business process execution, especially if the organizational structure in your Google Apps configuration reflects the corporate structure of your company. If defined properly, the structure may be used by external applications that integrate with Google. Almost all organizations have some approval processes in which the decision chain follows the organizational structure in a bottom-up direction. A simple example is a vacation leave request issued by an employee, which usually must be approved by his/her supervisor.

Flownie for your business process

If you need such functionality in your business process, Flownie.com is a good choice. Not only does it integrate with Google Apps, but it can also import your organizational structure, so you can use the hierarchical relationships to automate your workflows.
How to use it? It's simple. Assuming you have admin rights in Flownie:

  1. Select "Users" in the right menu bar.
  2. Hit "Users" in the nested menu.
  3. You will see the list of users in your Flownie space, with a "Synchronize with Google" button above the table.
  4. Hit the "Synchronize with Google" button.
  5. All users from your Google Apps domain will be imported into Flownie.
  6. In the "Users → Teams" tab you will see the structure of your company taken from your Google domain configuration. You can modify it (this will not affect the Google domain settings, since the synchronization is one-directional) or use it as it is.

Now, suppose you model a process and you want a task to be performed not by a named person, but by the supervisor of the person from the previous task. You simply need to set an Assignee. On the list of possible assignees there are also two roles: "Activity initiator" and "Immediate supervisor". Select "Immediate supervisor" as the assignee; this ensures that the person responsible for the task will be a supervisor. The biggest advantage of such a solution is that you don't need to assign specific people, because the supervisors will be assigned according to their place in the organizational hierarchy.
If you want to try using organizational structures to automate your business processes, please go to flownie.com and get started!
author: Dominik Zyskowski


Please, oh please, use git pull with rebase

This note was originally published on coderwall and makingco.de.

When working on a project you usually synchronize your code by pulling it several times a day. What you might not know is that by typing

git pull

you are actually issuing git fetch + git merge, which will result in an extra merge commit and ugly merge bubbles in your commit log (check out gitk to see them).
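Under the hood, the plain pull is roughly equivalent to the following two commands (assuming the usual origin remote and a branch tracking origin/master):

git fetch origin
git merge origin/master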

It’s much better to use

git pull --rebase

to keep the repository clean and your commits always on top of the tree until you push them to a remote server. The command will apply all your yet-to-be-pushed commits on top of the remote tree commits, keeping your commits in a straight line without merge branches (easier git bisects, yay!).
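If you find yourself typing --rebase every time, you can make it the default behaviour (pull.rebase is available in git 1.7.9 and newer; older versions can use branch.autosetuprebase instead):

# make "git pull" rebase by default in every repository
git config --global pull.rebase true

# alternative for older git: rebase on newly created tracking branches
git config --global branch.autosetuprebase always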

The result should be similar to

Creidhne:project hasik$ git pull --rebase
remote: Counting objects: 37, done.
remote: Compressing objects: 100% (21/21), done.
remote: Total 23 (delta 16), reused 0 (delta 0)
Unpacking objects: 100% (23/23), done.
From git.example.com:project/project
   9c56a5a..3e62251  master     -> origin/master
First, rewinding head to replay your work on top of it...
Fast-forwarded master to 3e62251c80998bf744f35d0d8e732e2cff01e072.

Note the "rewinding head to replay your work on top of it…" line. It means that your commits are being rebased onto the current remote HEAD.

If you want to merge a feature branch it might be wiser to actually merge your commits (thus having a single point of integration of two distinct branches).

Conflict resolution will now happen on a per-commit basis, not all at once, so you may have to use

git rebase --continue

to get to the next batch of conflicts (if there are any). On the upside, your commits will still be on top of everything, so you can change, rearrange or remove them before pushing your version to a remote repository.
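In practice the per-commit conflict loop looks roughly like this (the file path is just a placeholder), and if anything goes wrong you can always abort and return to the state from before the pull:

# fix the conflicts, mark them as resolved, then continue with the next commit
git add path/to/conflicted-file
git rebase --continue

# or give up and restore the branch to its pre-pull state
git rebase --abort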

NOTE: Because of the many discussions around this note: I DO NOT encourage rebasing remote (public or shared) branches. Rebasing local history is OK (it's more than OK, it's sometimes necessary to maintain a clean history), but rewriting other people's commit history is considered bad practice.

author: Krzysztof Hasiński


Authentication in AngularJS (or similar) based application

Implementation of the concept described below and also a demo application is available here: https://github.com/witoldsz/angular-http-auth.


Hello again,

today I would like to write a little bit about how I handle authentication in an application front-end running inside a web browser, using AngularJS.

Traditional server ‘login form’? Well… no, thank you.

At the beginning, I did not realize that the traditional and commonly used form-based authentication does not suit my client-side application. The major problem lies in a key difference between traditional server-side and client-side applications. In a server-side application, no one but the server itself knows the user's state and intentions, whereas in a client-side application this is no longer true.
Let’s take a look at a sample server-side web application flow of events:

  • user asks for a web page: something.com,
  • server generates markup and sends it back,
  • user chooses to visit a secured sub-page: something.com/secured,
  • server figures out that the user needs to authenticate, so it:
    – remembers what the user asked for,
    – responds with a login form (or a redirect to one) instead of the requested content,
  • once user sends credentials back to the server, it serves what user initially asked for,
  • user keeps visiting secured pages and filling secured forms until their authorization expires (for whatever reason),
  • server once again responds with a login form and, once the user provides credentials, redirects them back to wherever they wanted to go.

Same application, but different flow:

  • user asks for: something.com/secured/formXyz,
  • server sends a login form,
  • user logs in and fills a long and complicated form, but it takes so long that their session expires,
  • user submits a form, but since the session is not valid anymore, login screen appears,
  • once user logs in, server can process the submitted form, no need to re-enter everything again.

Now let's see how it is in a client-side application, running inside a web browser:

  • user types somewhere.com in an address bar,
  • browser sends a request for an HTML page (text/html),
  • server sends back a page with client-side application code,
  • code starts to execute and asks for the user name (e.g. it wants to display it in the upper right corner), so the next request is issued, this time for data (application/json),
  • a traditional login form does not make sense here, as the browser is not requesting a web page, but some data instead. Something is not right here.

OK, so let's try it the other way around:

  • user asks for somewhere.com
  • the entire site is hidden behind a login-form authentication mechanism, so instead of the application, the user is presented with a form where they can provide credentials,
  • once the user submits, the originally requested page is served, and as before: the application loads, issues a new data request (application/json) for the user name and receives it back. Everything is nice so far… but let's assume that our session expires (for whatever reason) while we are in the middle of a long and complicated form…

Guess what? We are exactly in the same place as before: our application is up and running, but our session is not valid any more, and form-based authentication is useless at this point. Or is it?
Let's try to adapt. Using AngularJS, we can simply write an HTTP interceptor. Such an interceptor can check every response and once it detects a login form, we can… well…

  • We can redirect ourselves to a login page, but this is a complicated task, because the server does not know where we are at the moment (or to be more precise: what our state is, what we were doing). Remember, a client-side application is client-side; how is the server supposed to figure out what to do next, after we have logged in? From the server-side point of view we are sitting on one page all the time.
  • We can be smarter: we can bring an IFRAME to life and let it show the login form. But this is also complicated: we have to figure out somehow what is happening inside such an IFRAME. How do we detect a successful login? Is it easy? Hard? Not that hard, but tricky?

Of course everything is doable, but after some investigation, I did something else. Very simple and clean, but it requires server-side adjustments.
Solution: a client-side login form, shown when the server answers with status 401.
My solution assumes the following server-side behaviour: for every /resources/* call, if the user is not authorized, respond with a 401 status. Otherwise, when the user is authorized or when it is not a /resources/* request, send what the client asked for. No login forms, but we still need some login URL, so our application can send the login and password there. Plain old cookie-based sessions? Why not, they work for me; web browsers and application servers handle them automatically by default.
AngularJS has a $http service. It allows custom interceptors to be plugged in:

[sourcecode language="javascript"]
myapp.config(function($httpProvider) {
    function exampleInterceptor($q, $log) {
        function success(response) {
            $log.info('Successful response: ' + response);
            return response;
        }
        function error(response) {
            var status = response.status;
            $log.error('Response status: ' + status + '. ' + response);
            return $q.reject(response); // similar to: throw response;
        }
        return function(promise) {
            return promise.then(success, error);
        };
    }
    $httpProvider.responseInterceptors.push(exampleInterceptor);
});
[/sourcecode]

The nice thing is that from the 'response' parameter we can rebuild the request. To fully understand how the $http interceptor works, we need to understand $q (a small sketch follows the list below).
The goal is to be able to:

  • capture 401 response,
  • save the request parameters, so in the future we can reconstruct original request,
  • create and return a new object representing the server's future answer (instead of returning the original failed response),
  • broadcast that login is required, so application can react, in particular login form can appear,
  • listen to successful-login events, so we can gather all the saved request parameters, resend them and resolve all the 'future' objects (returned previously).
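
Here is a tiny, hypothetical sketch of what $q gives us (the variable names are for illustration only and assume a place where $q has been injected): a deferred object stands for an answer we do not have yet; we can hand out its promise immediately and resolve or reject it later.

[sourcecode language="javascript"]
// a deferred represents an answer we do not have yet
var deferred = $q.defer();

// hand out the promise right away; callers attach their callbacks to it
deferred.promise.then(function(value) {
    console.log('resolved with: ' + value);
}, function(reason) {
    console.log('rejected because: ' + reason);
});

// ...and fulfil (or reject) it whenever we are ready, e.g. after a successful login
deferred.resolve('the real response');
[/sourcecode]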

The nice thing about the solution above is that when you request something and the server responds with status 401, you do not have to (and cannot) handle it yourself. The interceptor will handle it for you. You will eventually receive the response. It will arrive as if nothing had happened, just a little bit later (unless the user does not provide valid credentials).
OK, so here is a bit of code.

[sourcecode language="javascript"]
/**
 * $http interceptor.
 * On 401 response – it stores the request and broadcasts 'event:loginRequired'.
 */
myapp.config(function($httpProvider) {
    var interceptor = ['$rootScope', '$q', function(scope, $q) {
        function success(response) {
            return response;
        }
        function error(response) {
            var status = response.status;
            if (status === 401) {
                var deferred = $q.defer();
                var req = {
                    config: response.config,
                    deferred: deferred
                };
                scope.requests401.push(req);
                scope.$broadcast('event:loginRequired');
                return deferred.promise;
            }
            // otherwise
            return $q.reject(response);
        }
        return function(promise) {
            return promise.then(success, error);
        };
    }];
    $httpProvider.responseInterceptors.push(interceptor);
});
[/sourcecode]

[sourcecode language="javascript"]
myapp.run(['$rootScope', '$http', function(scope, $http) {
    /**
     * Holds all the requests which failed due to 401 response.
     */
    scope.requests401 = [];

    /**
     * On 'event:loginConfirmed', resend all the 401 requests.
     */
    scope.$on('event:loginConfirmed', function() {
        var i, requests = scope.requests401;
        for (i = 0; i < requests.length; i++) {
            retry(requests[i]);
        }
        scope.requests401 = [];

        function retry(req) {
            $http(req.config).then(function(response) {
                req.deferred.resolve(response);
            });
        }
    });

    /**
     * On 'event:loginRequest' send credentials to the server.
     */
    scope.$on('event:loginRequest', function(event, username, password) {
        var payload = $.param({j_username: username, j_password: password});
        var config = {
            headers: {'Content-Type': 'application/x-www-form-urlencoded; charset=UTF-8'}
        };
        $http.post('j_spring_security_check', payload, config).success(function(data) {
            if (data === 'AUTHENTICATION_SUCCESS') {
                scope.$broadcast('event:loginConfirmed');
            }
        });
    });

    /**
     * On 'logoutRequest' invoke logout on the server and broadcast 'event:loginRequired'.
     */
    scope.$on('event:logoutRequest', function() {
        $http.put('j_spring_security_logout', {}).success(function() {
            ping();
        });
    });

    /**
     * Ping server to figure out if user is already logged in.
     */
    function ping() {
        $http.get('rest/ping').success(function() {
            scope.$broadcast('event:loginConfirmed');
        });
    }
    ping();
}]);
[/sourcecode]
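
The run block above listens for 'event:loginRequest' and reacts to 'event:loginRequired' and 'event:loginConfirmed', but nothing in the snippets shows the other end of those events. A minimal, hypothetical login controller sketch (the controller name, scope fields and form bindings are mine, not part of the original code) could look like this:

[sourcecode language="javascript"]
myapp.controller('LoginController', ['$scope', '$rootScope', function($scope, $rootScope) {
    // controls visibility of the login form in the view, e.g. ng-show="visible"
    $scope.visible = false;

    // the interceptor broadcasts 'event:loginRequired' on every 401 response
    $scope.$on('event:loginRequired', function() {
        $scope.visible = true;
    });

    // hide the form again once the credentials have been accepted
    $scope.$on('event:loginConfirmed', function() {
        $scope.visible = false;
    });

    // called from the form, e.g. ng-submit="login()"
    $scope.login = function() {
        $rootScope.$broadcast('event:loginRequest', $scope.username, $scope.password);
    };
}]);
[/sourcecode]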

I wanted to provide a working example with a login window, using jsfiddle.net, but I have to postpone it. It is getting a little bit late now, so I am finishing this entry here. I hope you like it 🙂
author: Witold Szczerba