
Friday, May 19, 2017

React + MobX - 101 Todo App for beginners - Part 1

This walk-through tutorial (yet another one) guides you through using the MobX library while building a React application.

The application we'll build is a simple Todo App.

The post discusses the following topics:

  1. What is MobX - A very short explanation about what MobX is. 
  2. Why do you need it? - Reasons to consider using MobX.
  3. Why not Redux (another state management library)?
  4. Code tutorial - Part 1 (out of 2)

What is MobX - https://mobx.js.org/

In short - MobX is a state management library.

In "long" - MobX is a JS library that allows you to define "observable" state (/model/your JS Objects) containing your App information.

The library gives you the power to automatically derive any computed model that depends on observables!
As MobX puts it:
Anything that can be derived from the application state, should be derived. Automatically
In the context of React:
This gives you the power to mutate the state directly, by reference, from your components, without calling React's setState - while triggering a render only on the components that actually need to be re-rendered.

Pretty nifty.
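To make the "derive automatically" idea concrete, here is a minimal sketch (not from the tutorial code - the store shape and the unfinishedCount name are made up for illustration):

import { observable, autorun } from 'mobx';

// An observable state object; unfinishedCount is derived (computed) from it.
const store = observable({
    todos: [],
    get unfinishedCount() {
        return this.todos.filter(t => !t.isDone).length;
    }
});

// autorun re-executes whenever any observable it reads has changed.
autorun(() => console.log('Left to do:', store.unfinishedCount));

// A plain mutation - no setState, no events - and the autorun above fires again.
store.todos.push({ title: 'Learn MobX', isDone: false });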

For more about MobX: https://mobx.js.org/

Why do you need it?

First of all, no one said you should use it. There are plenty of React-based applications that work just fine without MobX.

However, you should consider MobX for the following reasons, which may give you an advantage.
For instance:
  1. Simpler Components - Sharing state between several components can be cumbersome, especially if one component is deeply nested inside another component that uses the same data.
    It would mean you'd need to pass that state down to all the children of the top-most component, resulting in many components along the way passing on state they never use (see the short sketch right after this list).
    You end up with components that change state belonging to other components, the state of your application scattered across many components, and other complications.
    With MobX, you can specify for each component the exact props it should use.
  2. Organized state model - With MobX, your application state is mostly organized in one or a few objects that hold only the state of your application. That makes it easy to reason about the application's model.
  3. Optimized App - Only the components that actually need to render are re-rendered, instead of the entire component tree.
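To illustrate the first point, here is a tiny plain-React sketch (hypothetical component names) of the "pass-through props" problem MobX helps you avoid:

import React from 'react';

// Without MobX: App owns the state, so Toolbar has to receive and forward
// a prop it never uses itself, just so its child (UserBadge) can read it.
const UserBadge = ({ user }) => <span>{user.name}</span>;

const Toolbar = ({ user }) => (
    <div>
        <UserBadge user={user} /> {/* Toolbar only passes "user" along */}
    </div>
);

const App = ({ user }) => <Toolbar user={user} />;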

That sounds like I'm trying to "sell" you MobX.
I find it to be a good library for state management (not saying it's the only one - check out Redux too), despite the "magic", which I dislike in principle in software, and the awkward feeling I get when it reminds me of "two-way data binding" (with which the industry has had a love-hate relationship since the days of JavaFX, Flex, and probably even earlier libraries and frameworks).

It comes with a small cost of adding a library to your App, and learning how to use it.
If you have a small application, with a very simple model, you might not even need it.


Why not Redux?

In case you don't know Redux, as mentioned on its site, it is:
"A predictable state container for JavaScript apps."

In other words - it's a state management library, like MobX. But different.
The "what" is similar; the "how", a bit different.

So first of all - why not use it instead?
There's really no good answer to that, and IMO it's a matter of personal style and what you're comfortable with. Redux does seem to have less "magic", though.
(The "magic" I refer to is the observables in MobX - this is where the MobX fun happens.)

Some say that MobX is easier to get started with, as there's less code to write.
MobX supports several (optional) "stores", versus the single store in Redux, which can be annoying if you already have an existing app and need to start consolidating all of your state in one place.
MobX also allows a non-FP paradigm, where your state can be mutable (the observable objects in your state).

In any case, I would advise trying Redux before you make your decision.



Code walk-through


Let's get our hands dirty.
At last, we're going to build the Todo app using MobX & React.

Step 0: Fork or clone the Git Repo
Go here and fork/clone it: https://github.com/nirgit/todoMobxTutorial

You will now have a skeleton application which is ready and set up, using ES6 with JSX & Webpack.
MobX is also included in the package.json, so you won't need to add it as a dependency.

Go to the project's directory and run from the console / shell:

> npm run start

Now go to http://localhost:8080

Your web page should look like this:




Step 1: Creating a plain React Todo App

Let's create our plain Todo application based on React only.
  1. Stop Webpack from watching your changes
  2. Go to the project's directory and run from the console / shell:
> git checkout step1
> npm install
> npm run start


Re-open http://localhost:8080, you should now see this:

Great!
You now have a functional React Todo app.
You can open the app.js file and see pretty much all the code.
You won't find anything unusual.
Note, though, how the entire state is stored on the App component. Once we refactor the code a little, we will have to pass the todos as a prop to the child components, and probably things like a callback that toggles a todo's "isDone" field, so we can use setState to make it work.

Go back to your shell and now, checkout the step1_2 branch.
> git checkout step1_2


Open the App.js file and take a look at the render method.
See the line?
...
<TodoList todos={this.state.todos} toggleDone={this.toggleDone.bind(this)} />
...

Notice how we pass the toggleDone callback to the TodoList component, while the callback is still defined on the App component. If you decided to create Todo components, you'd probably pass that callback on into each of those components.
And so you would have a component which passes a function to a child (TodoList) that does nothing with that function but pass it on to its own children, which affect state that lives on their grandparent.
A bit messy.
If you had this situation in a large-scale application, not only could it be confusing, it could be practically impossible to reason about the full state of your application at any given point in time.

Let's add MobX and see what happens.

Step 2: Adding MobX to our React App

MobX has a very small core API.

@observable
The most significant part of the API, and the basis for all the MobX "magic", is the observable function / decorator (a plain function, if you decide not to use the decorator syntax from ES Next).
In this post we'll stick with the decorator, as it feels (subjectively) like a cleaner way to declaratively state that a piece of data / an object is being observed.
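For instance, a minimal sketch of both forms (the TodoStore name here is just for illustration; the decorator form assumes your Babel setup supports decorators and class properties):

import { observable } from 'mobx';

// Decorator syntax:
class TodoStore {
    @observable todos = [];
}

// The same idea without decorators, using observable as a plain function:
const todoStore = observable({ todos: [] });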

@observer
We have an "observable" data, now what?
We need someone to "observe" it, and react to the changes.
MobX being a library in its own right, does not provide an @observer for React Components "OOTB" ("out of the box").
In order to use such @observer we need to use the MobX-React NPM package.
Then we can "decorate" our React Components with an @observer decoration.

The reaction to the changes made to the observable object will result in calling the render method of the React component observing that object. Pretty cool.

Code example:
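A minimal sketch of the idea, using a bestMovies observable array and a MovieList component (the movie titles are just for illustration):

import React from 'react';
import { observable } from 'mobx';
import { observer } from 'mobx-react';

const bestMovies = observable(['The Matrix', 'Fight Club']);

// The @observer decoration re-runs render whenever an observable
// dereferenced inside render (here, bestMovies) changes.
@observer
class MovieList extends React.Component {
    render() {
        return <ul>{bestMovies.map(m => <li key={m}>{m}</li>)}</ul>;
    }
}

bestMovies.push('Memento'); // MovieList renders again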


So every time you change the bestMovies collection, the render of the MovieList component will be triggered.


Stop your app if it's running, go back to your shell, and check out the step2 branch.
> git checkout step2

and run the following to install the missing NPM packages and re-run the app:
> npm install && npm run start


You shouldn't see any visual difference, but open the browser's Console and see what you got:


See how you now get a log print in the console when a component renders.
Try clicking a checkbox; you should get the following prints:

See how you now get only some of the prints?
Let's do a quick analysis:
  1. App Component - renders because its render function uses the isDone flags of the Todos in order to render the Progress Bar.
  2. ProgressBar Component - renders for obvious reasons.
  3. TodoList Component - renders because it passes on all of the properties of the Todo objects, including the isDone property, when rendering.
    It does that using the ES6 spread operator (see "{...t}" in the code).
  4. TodoLine Component - a new component created to represent the Todo lines, as it makes MobX's benefit clearer to see.
    Notice how only the TodoLine Component whose line we changed re-rendered.
    All the rest stayed the same, without getting re-rendered.


You can run the following quick check.
Go to the todoLine.js file and change the following line:
export default observer(TodoLine);

to this line:
export default TodoLine;

See how we omitted the MobX observer?
Now go back to the browser, and try to check the same checkbox as earlier.
Keep your console open. You should get the following prints:




Hopefully you got something out of this post - it should give you an idea of what MobX is by now. Stay tuned for the Part 2 blog post, which will discuss topics like:

  1. Adding Actions and Action Handler
  2. Adding a Provider component
  3. Adding Containers
Or go explore more yourself on https://mobx.js.org/.

Enjoy,







Monday, November 14, 2016

Bit.ly Resolver - Chrome extension

It's been a while since I last blogged about anything...

Something that's been bothering me for a long time, ever since I joined Twitter, is bit.ly links.

Naturally, bit.ly links got a serious boost in usage with Twitter's launch, and they serve as a great tool for compacting links and embedding them neatly inside tweets.

However, many times, looking at those links, you can't help but wonder - "Where will it take me if I click it? Am I going to get a worm or a virus on my laptop?"
Sometimes, people are just too lazy to add any context telling you what a link is about, and so they tweet or post a bit.ly link along with their cat's photo.
(Usually it is not their cat's paw-n-shop ya' know...).

So, being as lazy as I am, yet curious as a cat... I decided to pick up the glove and create my very own Chrome extension (sorry, y'all folks using other browsers), which shows you the REAL URL behind the bit.ly link you're gazing at right this second, simply by hovering over it.

Welcome - Bit.ly Resolver Chrome Extension

You can simply click it, in order to install that extension on your browser.


Bit.ly Resolver - Chrome Extension




GitHub link for those of you who are interested: 

As a bit.ly :) : http://bit.ly/2fzMal8
Or simply as: https://nirgit.github.io/bitlyResolverCE/


See you in a bit.ly!

Saturday, May 21, 2016

Typescript, React and Sass - All in one - Skeleton Project built with Webpack

Just recently I needed to set up a project that included TypeScript, React and Sass, built with Webpack, and I couldn't find any good, up-to-date source on the web for a skeleton project to base it on.

So I decided to create my own.

You can check it out (free of course) on my GitHub account:





Sunday, January 17, 2016

Spaceship X - Game Launch on Android Play



Spaceship X - Timeless Hero

A retro flavoured Spaceship shooter saving the Galaxy!


Exciting!
I have released my first Game App to Google Play!

It is free to download:
https://play.google.com/store/apps/details?id=nm.ibex.spaceshipx

The game's website:
http://leapingibex.wix.com/spaceshipx


Have fun!

Saturday, April 18, 2015

Simple Webapp example using Flux with React

If you've been following the latest Front-End news for Web for the past year,
you have probably seen the huge buzz around ReactJS.

Just in case you haven't, you can visit ReactJS on the Facebook Github's page at: https://facebook.github.io/react/ or just Google it and read about it.

So for those of you that do know what React is all about, you may have also heard about Flux.
If you haven't :) I recommend you to read about it a bit
here as well http://facebook.github.io/flux/docs/overview.html

If I had to sum it up in a nutshell, I'd say that Flux is an application architecture for building
big front-end web apps that scale.  - Well, that's pretty vague :)

A main concept of this architecture is a uni-directional flow: an "Action"
(an application event) is triggered by the View part (your components, which are shown to the users)
through a "Dispatcher" (some kind of event hub), which propagates the Actions
on to the "Stores" (your application's model). The Stores hold the state (data) of your application,
perform the necessary updates and logic, and update the View back using listeners.
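A minimal sketch of that flow (the action and store here are made up for illustration; Facebook's Dispatcher comes from the "flux" NPM package, but any event hub would do):

var Dispatcher = require('flux').Dispatcher;
var dispatcher = new Dispatcher();

// Store: holds the state, registers with the Dispatcher, and notifies the View.
var todoStore = { todos: [], listeners: [] };
dispatcher.register(function(action) {
    if (action.type === 'ADD_TODO') {
        todoStore.todos.push(action.text);
        todoStore.listeners.forEach(function(listener) { listener(); }); // "change" event back to the View
    }
});

// View: triggers an Action through the Dispatcher.
dispatcher.dispatch({ type: 'ADD_TODO', text: 'Read about Flux' });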

This illustration might help understanding the Flux flow a little better:









Why is Flux a good idea? - There are several reasons, but I will mention only 2 which I think
are the most important.

1. Simple Application Architecture - You can show this diagram to someone who never
heard of Flux before, and explain it in 2 minutes.

2. Scalable - Flux lets you scale. Meaning, it's relatively easy to add new features to
your application, debug it, and keep it performant.

Why?
There's a really good talk about it by Facebook which is worth watching:


After all that has been said, I really wanted to do a basic implementation of Flux myself,
to "feel" how it works. And so I created a very small repository on GitHub that implements
Flux, which you are welcome to look at and use.

The example code

The application built on top of that is a classic Todo App: https://github.com/nirgit/Flux1

The code uses several 3rd-party libraries such as RequireJS and React Templates, but even if you are not familiar with RequireJS or with AMD, my guess is you will still be able to understand how
it works.
React Templates is an amazing 3rd-party library that helps you implement the "render" method
of React Components in an HTML-like syntax. Totally cool.

Good luck!




Sunday, June 16, 2013

Lottery for winning a copy of the "Android NDK Cookbook" is now opened

In continuation of last week's post about winning a free copy of the eBook "Android NDK Cookbook",
the lottery starts today!

3 Lucky people will be given a free copy of the "Android NDK Cookbook" sponsored by "Packt Publishing".


Description of the book

Android Native Development Kit Cookbook will help you understand the development, building, and debugging of your native Android applications. You will discover and learn JNI programming and essential NDK APIs such as OpenGL ES, and the native application API. You will then explore the process of porting existing libraries and software to NDK. By the end of this book you will be able to build your own NDK apps.

Check it out on Packt Publishing: http://www.packtpub.com/android-native-development-kit-cookbook/book


How to participate?

Simply send an email by clicking here or by sending it to 
containing a subject line "Android NDK Cookbook lottery".




Deadline

The contest will close on 30th June 2013. Winners will be contacted by email, so be sure to use 
your real email address! 



Good luck !


Saturday, June 8, 2013

Win a free eBook copy of "Android NDK Cookbook"

Are you an Android developer or interested in Android development?
This might interest you!

On the 16th of June 2013 I will conduct a lottery on the blog, to win a free eBook copy of "Android NDK Cookbook".

The lottery will last for 2 weeks (until June 30th), during which you will be able to participate to win a free copy of the book.

Description of the book:

"Android Native Development Kit Cookbook" will help you understand the development, building, and debugging of your native Android applications. You will discover and learn JNI programming and essential NDK APIs such as OpenGL ES, and the native application API. You will then explore the process of porting existing libraries and software to NDK. By the end of this book you will be able to build your own apps in NDK apps. 




Further details of the book can be found here: http://www.packtpub.com/android-native-development-kit-cookbook/book


Stay tuned for the 16th of June!


* The copy of the book is sponsored thanks to "Packt Publishing".

Sunday, May 26, 2013

Javascript Closure Functions Memory Model - How does it work?

Hey everyone,
For those of you who are writing code in Javascript or wish to do so, it is essential to understand
the concept of closures, and how they work.

I must say that when I started looking into it, it was somewhat difficult to find proper material
on the web, to get a good grasp of how things work. So this post's idea is to try and shed some
light on this matter. I don't guarantee that this will make you understand everything, but do hope
it will help in some way.

*Note: I could have some mistakes in this post of course, and would be very happy if you sent me
any corrections or comments, as this is supposed to help people and show the power of Javascript closures.
Everything written in this post is according to my understanding of how things work.
So regard this writing with care. I tried to do the best I could when I wrote this post.



The post contains 2 parts:
1. What is a Closure?
2. What does the memory model look like?


1. What is a Closure?

"... is a function or reference to a function together with a referencing environment—a table storing a reference to each of the non-local variables (also called free variables) of that function. A closure— unlike a plain function pointer—allows a function to access those non-local variables even when invoked outside of its immediate lexical scope." - From Wikipedia.

A simple example of such a Closure is a counter.
Consider the following script, for example:

function createCounter() {
       var counter = 0 ;
       return function() {
              counter++ ;
              return counter ;
       }
}
 
var counter = createCounter() ;
counter() ;  // returns 1
counter() ;  // returns 2
var counter2 = createCounter() ;
counter2() ; // returns 1
counter() ;  // returns 3


How does this work?
When the createCounter function is executed it creates a new execution scope, which contains a reference to a variable named counter, and it returns an anonymous function.

So how does that function know about counter when it's executed?
In Javascript every function has a kind of property called a "scope chain", which defines the scopes the function is bound to. When we define a function in Javascript, that "definition scope" is stored in the function's scope chain. So when we define and return the anonymous function in createCounter, that anonymous function is bound to the scope in which it was defined.
Thus, when the function is executed by calling "counter()", a new "execution scope" is created and added to the function's scope chain. So when the function needs to look up counter, it walks up its scope chain and eventually finds it in the function's "definition scope".
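For example, a small sketch of that look-up order:

var greeting = "hello" ;                  // global scope

function outer() {
    var name = "closure" ;                // outer's "definition scope"
    return function inner() {
        // 'name' is not local here - it is found by walking up the scope chain
        // to outer's scope; 'greeting' is found even further up, in the global scope.
        return greeting + " " + name ;
    } ;
}

outer()() ; // "hello closure"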

Another thing to remember is that every time you run a function you run it inside a "context".

What is a context?
A context is the object the function is executed on. It is recognized with the this keyword.
It is defined when we write the following code in Javascript, for example:

function sayHello() {
    console.log("Hello from", this) ;
}
var obj1 = { sayHello: sayHello } ;

// context here equals obj1 (this === obj1)
obj1.sayHello() ;

// context here equals the global object, which in the case of a browser is *window (this === window)
sayHello() ;

* It might be equal to undefined (this === undefined) in case we're running in 'strict mode'.

You can check more about it at MDN - Function Context.




2. What does the memory model look like?

I think it's important to understand what the memory model looks like, graphically, to get a better
notion of how things work. Even though you could manage without it, I find illustrations very
helpful for understanding how something works.


Simple example

When you define a regular function, like:
function foo(number) {
   [ function_body ]
}

We get something like the following in memory:


If we execute foo(3), we will get something like the following in memory:


So when foo(3) is executed, a new "Activation object" is created.
The function's parameters and arguments array are passed to it, and it is added to the scope chain of foo(3).
So when foo looks up the "number" parameter, it will look it up the
scope chain, starting from the closest (last) scope relevant to the execution, which in this
case is of course the "Activation". If it doesn't find number there, it will continue to the next scope in the
chain, until it finds it.


Closure example

Let's take the Closure example we referred to earlier - the counter function - and see what it
would look like in memory.

Quick reminder, our counter function looks like this:

function createCounter() {
       var counter = 0 ;
       return function() {
              counter++ ;
              return counter ;
       }
}

This maps to a memory model which looks (somewhat) like this:

counter javascript closure
You might wonder why the Closure's scope chain also refers to the Global Object.
The reason is that when we create a closure, it copies the scope chain of the execution context, which in this case belongs to createCounter().
What makes the Closure a closure is an interesting side effect in Javascript: instead of the Activation Object being destroyed after createCounter's execution, it remains alive, because there is a reference to it from another function - the closure.

Now, let's create such a counter by executing the following:

var myCounter = createCounter();  [1]
myCounter() ;                     [2]

So on line [1] we create myCounter, which is a reference to the closure in the illustration above.
When we execute line [2], we get 1 as an answer; the second time, we will get 2.
How it looks in the memory is shown in the next illustration:




When we execute myCounter, which is our closure in this case, we increment the counter variable, but in order to do so, the variable needs to be reached somehow.
This is where the "scope chain" comes into play. The first "environment" where the variable is looked up is at the top of the scope chain - in our case, the "Activation Object" of myCounter.
Since the variable is neither an inner variable nor a parameter, the look-up continues and moves on to the next scope in the chain, which is the "Activation Object" of createCounter! There it is found, read and assigned. (As counter++ is actually counter = counter + 1.)

Note that every time you call createCounter, a new Activation Object is created with a new counter variable which equals 0. This is how you can create as many separate counters as you like, without them affecting each other.


I hope this helped you in some way to understand the memory model of functions and closures a little better. Any comments / suggestions and complaints are most welcome.
Enjoy "closuring".

Friday, March 8, 2013

GWT Chrome extension with JSONP Server communication

I thought this post would be a good idea for people who want to know how to make an 'end-to-end'
GWT Chrome extension, with a connection to a server of their own.

While this demonstrates specifically how to enable that communication from a Chrome extension, there is
no difference in code if you wish to deploy your client code elsewhere (not on Chrome).
Many web applications choose to deploy their server code & client code on the same application server, like JBoss or Tomcat.

As you know, in GWT there are several ways to communicate with a server.
One of them is RPC, for instance, which gives you the ability to invoke methods on a Java interface
without having to deal with anything but Java code. Which is cool, as it makes a very smooth transition from
the client-side code to the server-side code.

However, when we want to implement this in a Chrome extension, things get a bit trickier.
The reason is that you'd have to separate your server module from your Chrome extension module
(as we can't deploy server-side classes to Chrome - it runs only JavaScript),
and you would need a shared model between the two in order to communicate.
It's not hard to do at all, it just makes for a longer example.
(In Eclipse, if you create a new "Web Application Project" using Google's plugin, you will have the client & server code in the same project, with a "shared" directory which serves as a model bridge between the two.)

Another option that exists is using JSONP communication with the server.
In our case - this makes it a lot simpler.
We will expose a REST API on a server we'll implement, for getting images, and send a JSONP request from the QuickPik extension to retrieve those images. Obviously, it can be anything else you want it to be in your application.
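In case JSONP itself is new to you, the whole trick is that the browser loads the response as a script, and the server wraps the JSON in a call to a callback function whose name the client supplied. Roughly (the handlePhotos name here is just for illustration):

// Client side: instead of an XHR (which would be blocked cross-origin),
// we add a <script> tag pointing at the REST endpoint.
function searchPhotos(searchExp) {
    var script = document.createElement('script');
    script.src = 'http://localhost:8080/qpserver/rest/quickpik/searchPhotos' +
                 '?searchExp=' + encodeURIComponent(searchExp) +
                 '&callback=handlePhotos';
    document.body.appendChild(script);
}

// The server responds with:  handlePhotos({"items": [...]});
// which the browser executes, invoking our globally visible callback.
function handlePhotos(result) {
    console.log(result.items);
}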

TO THE CODE!!!

The Server

In order to create a REST service and expose it, we will use the Jersey library which is an implementation for
building RESTful web services. It's very simple.

The following QuickPikService class is our REST service:



package quickpik.server.web;

import java.io.IOException;
import java.io.InputStream;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.Date;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

import org.codehaus.jettison.json.JSONException;
import org.codehaus.jettison.json.JSONObject;

import com.sun.jersey.api.json.JSONWithPadding;

@Path("/quickpik")
public class QuickPikService {

    private final static int BUFFER_SIZE_IN_BYTES = 1024;
    private final static String FLICKR_API = "http://api.flickr.com/services/feeds/photos_public.gne?format=json&tagmode=all&tags=" ;

    @GET
    @Path("searchPhotos")
    @Produces({ "application/x-javascript", MediaType.APPLICATION_JSON})
    public JSONWithPadding getPhotos(@QueryParam("searchExp") String searchExp,
                                     @QueryParam("callback") String callback) {
        // "log" the call
        System.out.println("[" + new Date() + "] searching for: " + searchExp) ;
        URL flickrUrl = getSearchURL(searchExp) ;
        JSONObject data = tryToGetSearchResult(flickrUrl);
        return new JSONWithPadding(data, callback);
    }

    private URL getSearchURL(String searchExp) {
        String composedURL = FLICKR_API + searchExp;
        try {
            return new URL(composedURL);
        } catch (MalformedURLException e) {
            e.printStackTrace();
            throw new RuntimeException("URL composition failed. Please check your URL: " + composedURL, e) ;
        }
    }

    private JSONObject tryToGetSearchResult(URL flickrUrl) {
        try {
            return getSearchResult(flickrUrl) ;
        } catch (IOException | JSONException e) {
            e.printStackTrace();
            throw new RuntimeException("Failed searching.", e) ;
        }
    }

    private JSONObject getSearchResult(URL flickrUrl) throws IOException, JSONException {
        InputStream is = flickrUrl.openStream() ;
        String result = readDataFromStream(is);
        // Flickr specific string prefix for the JSON feed.
        if(result.indexOf("jsonFlickrFeed(") >= 0) {
            result = result.substring("jsonFlickrFeed(".length(), result.length()-1) ;
            return new JSONObject(result) ;
        } else {
            return new JSONObject("{}") ;
        }
    }

    private String readDataFromStream(InputStream is) throws IOException {
        StringBuilder data = new StringBuilder() ;
        byte[] buffer = new byte[BUFFER_SIZE_IN_BYTES] ;
        int bytesRead = is.read(buffer) ;
        while(bytesRead != -1) {
            data.append(new String(buffer, 0, bytesRead)) ;
            bytesRead = is.read(buffer) ;
        }
        is.close() ;
        return data.toString() ;
    }
}



That's it. This is our REST service, and we are exposing our API by using the @Path annotations
on the class and on the public method getPhotos. There are also other annotations, such as the
@GET annotation and the @Produces annotation, which state that the method is invoked on
an HTTP GET and returns (per the @Produces annotation) a JSON (with padding, eventually = JSONP) object.

As you can see, in the class, we are using a Flickr API to get our images data from.

Now all that's left in order to make this service work is to define the Jersey servlet in the web.xml file,
which is done this way:


<web-app>
...

    <servlet>
        <servlet-name>Jersey REST Service</servlet-name>
        <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
        <init-param>
            <param-name>com.sun.jersey.config.property.packages</param-name>
            <param-value>quickpik.server.web</param-value>
        </init-param>
    </servlet>

    <servlet-mapping>
        <servlet-name>Jersey REST Service</servlet-name>
        <url-pattern>/rest/*</url-pattern>
    </servlet-mapping>

...
</web-app>


After doing that, we're able to refer to our REST service by using a URL in the browser:
http://localhost:8080/<web-app-name>/rest/quickpik/searchPhotos?searchExp=<our-search-term>

In order to verify this indeed worked, and to get a better understanding of what the structure of our JSON result looks like before even writing code for the GWT Chrome extension client,
I did a small test using jQuery to see whether this worked. - You can find it in the GitHub repository.
What I did, in a nutshell, was to call the REST API:
$.getJSON('http://localhost:8080/qpserver/rest/quickpik/searchPhotos?searchExp=hello&callback=?', jsonCB);

So I executed an AJAX call to search for "hello", and passed a callback - jsonCB to process the result.
jsonCB looks like this:


function jsonCB(result) {
    console.log(result);
    if (result !== null && result) {
        if (result.items.length === 0) {
            $($("#serverResponses")[0]).append("No results.");
        } else {
            for (var item in result.items) {
                var url = result.items[item].media.m;
                var imgItem = document.createElement('img');
                imgItem.src = url;
                $($("#serverResponses")[0]).append("Server response: ")
                                           .append(url).append("<br>");
                $($("#serverResponses")[0]).append(imgItem).append("<br>");
            }
        }
    }
}



Note the line reading "var url = result.items[item].media.m" - this is how we get the URL information of the photo items being sent back to us from the server.
So now we know that inside "result", which is a JSON object, resides an array of objects, each containing
an object called media with a field called m.
You can discover this simply by debugging the values returned from the server in the browser.

So now, we can finally add some code to our GWT Client.


Client side - GWT Chrome extension

In the previous QuickPik posts, you might have noticed that in order to add an additional image data source, all you need to do is implement the IImagesDataSource interface and add that data source to the DataSource enum.
It's much easier than it sounds.
So we need to do the following:
1. Write an ImageDataSource that "talks" to our server.
2. Edit the manifest.json file to allow communication with our server, in order to comply with the
    "Content Security Policy" of Chrome's extensions.

IImagesDataSource implementation - ServerDS




public class ServerDS implements IImagesDataSource {

    private final static String QUICKPIK_SERVER_URL =
            "http://localhost:8080/qpserver/rest/quickpik/searchPhotos?searchExp=" ;

    ...

    @Override
    public void getImages(final String searchExpression, final FilterLevel filter,
                          final Callback<PhotosSearchResult, Void> callback) {
        String url = QUICKPIK_SERVER_URL + searchExpression ;
        JsonpRequestBuilder jsonp = new JsonpRequestBuilder();
        jsonp.requestObject(url, new AsyncCallback<ServerResult>() {
            public void onFailure(Throwable throwable) {
                // do some error handling..
            }

            public void onSuccess(ServerResult result) {
                handleLoadedImagesResult(searchExpression, filter, result, callback) ;
            }
        });
    }

    private void handleLoadedImagesResult(String searchExpression, FilterLevel filter, ServerResult result,
                                          Callback<PhotosSearchResult, Void> callback) {
        JsArray<ServerImageItem> imageItems = result.getItems();
        LinkedList<Photo> photos = collectImages(imageItems);
        callback.onSuccess(new PhotosSearchResult(searchExpression, filter, photos, 0, false)) ;
    }

    private LinkedList<Photo> collectImages(JsArray<ServerImageItem> imageItems) {
        LinkedList<Photo> photos = new LinkedList<Photo>();
        for (int i = 0; i < imageItems.length(); i++) {
            ServerImageItem imageItem = imageItems.get(i);
            Photo p = new Photo(i+"", imageItem.getImageURL(), imageItem.getImageURL()) ;
            photos.add(p) ;
        }
        return photos;
    }
}


I'll try to explain the code very briefly:
1. At the top of the class you can see the URL of the server we will be using to get the images from.
2. The getImages method, which must be implemented, is passed a searchExpression, a filter and
   a callback. We will ignore the filter to keep things simpler.
   GWT offers a class called JsonpRequestBuilder which allows us to make an asynchronous
   call to the server, passing a callback to deal with the result of the invocation.
3. If the call was successful, we have to handle the result somehow. We do that by calling a handle method.
4. In the handleLoadedImagesResult method, we get an object called ServerResult.
    ServerResult is a custom object created especially for this operation. It is a wrapper around
    a JavaScriptObject that gives access to the result.items object array. (Remember it from the jQuery test?)

This is what ServerResult looks like:


public class ServerResult extends JavaScriptObject {

   protected ServerResult() {
   }

   public final native JsArray<ServerImageItem> getItems() /*-{
     return this.items ;
   }-*/;
}


The native getItems method uses a GWT convention for writing JavaScript code against the underlying JavaScriptObject that the Java object wraps. So when you invoke getItems, behind the scenes the
JavaScript code "return this.items" is executed, returning the JavaScript items array.

I also mapped the objects in this array to Java objects in the same way I did with ServerResult.
Following the same idea, ServerImageItem looks like this:


public class ServerImageItem extends JavaScriptObject {

    protected ServerImageItem() {
    }

    public final native String getImageURL() /*-{
        return this.media.m ;
    }-*/;
}


Here we use the getImageURL method to return the (JavaScript) object's media.m property.
(This is exactly the same structure that was referred to when testing with jQuery above.)


So after this long explanation, all we do in the "handle" method is basically gather all of our
image URLs, create a Photo list as expected by the callback passed to us in the getImages method
invocation, and invoke the callback, passing it the expected PhotosSearchResult object containing
our Photos.

5. Now we must add our ServerDS to the DataSource enum, so it will be included as a data source
    when we run a search.
    We do that simply by adding the QUICKPIK_SERVER line below:

public enum DataSource {

    // add here your data sources
    FLICKR(false, new FlickrDS()),
    QUICKPIK_SERVER(true, new ServerDS())
    ;

    ....
}

Notice how I intentionally turned off the FLICKR data source, setting its isEnabled flag to false.

6. The last thing that needs to be done is to enable our Chrome extension to make calls to our server by
    editing the manifest.json file, adding the following line to it:

{
   ....
   // A relaxed policy definition which allows script resources to be loaded from localhost:8080 over HTTP
  "content_security_policy": "script-src 'self' http://localhost:8080; object-src 'self'"
}


That's all, folks. All you need to do now is build the server (I did it with Gradle in this project), compile
your GWT client, deploy it on Chrome and watch it work.

You can access all the code on GitHub, and download it.

Code on GitHub
All of the project's code can be found on GitHub at: https://github.com/nirgit/Quickpik-with-server



Wednesday, November 28, 2012

GWT HTML5 Game Example - Pong

Hi everyone,

This post is intended to present a very simple example of a game made with the HTML5 support of GWT.
The game uses only GWT, without any 3rd-party libraries such as PlayN, which works with GWT, is rather well known, and is meant for creating games based on HTML5.
Take a look at PlayN's website: http://code.google.com/p/playn/

GWT provides an API to the underlying HTML5 Canvas, and lets you use it to create
custom graphics.

This example shows the game of Pong. Though the game might be a little buggy, it nevertheless makes use of the fundamental HTML5 Canvas features - drawing simple geometric shapes such as rectangles, circles and gradients - and with those elements creates very simple and compact code for a game.

You can find all the code on GitHub for you to clone: https://github.com/nirgit/GwtPong

Click the screenshot to play:




Have fun playing!

Sunday, August 26, 2012

GWT Chrome extension using version 2 manifest

Hi everyone,

A while ago I wrote a small post about how to create a Chrome extension using GWT, claiming
you could use GWT to make powerful Chrome extensions.
One of the comments was regarding the usage of the manifest version.
When you write a Chrome extension, you must provide a file called manifest.json which
serves as a kind of descriptor file for your extension.
It contains all sorts of things, such as the name of the extension, description, icon, action,
javascript files involved, and so on...
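Roughly, a minimal manifest looks something like this (the field values here are made up for illustration):

{
  "manifest_version": 2,
  "name": "Quickpik",
  "version": "1.0",
  "description": "Search photos right from the toolbar",
  "browser_action": {
    "default_icon": "icon.png",
    "default_popup": "popup.html"
  }
}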

One of the entries in that file is the manifest version entry.
As of today, Chrome supports the current version - 2 - and also supports (for the moment)
version 1.
However, this is changing, and Google has already announced that it will stop supporting version 1
in the future (see http://developer.chrome.com/extensions/manifest.html#manifest_version), due
to security reasons.

So I was asked whether, and how, it is possible to deploy the GWT extension with the
version 2 attribute in the manifest.json file.

Version 2 is stricter in terms of security than version 1, especially when it comes to using external JavaScript files.
You can read about security concerns over here:
http://developer.chrome.com/extensions/content_scripts.html#security-considerations
http://developer.chrome.com/extensions/xhr.html

One of the problems I faced was the inline scripting I had (JavaScript appearing inside the HTML).
That problem showed up not only in my "hosting" HTML file, but also in the compiled output of GWT,
where a needed HTML file was generated that contained JavaScript code.
When that happened, I couldn't run the extension on Chrome using version 2 in the manifest.json file.

I wrote, in response to a comment someone left regarding this problem, that if Google provided a way to separate the JavaScript files from the HTML, the problem would be solved.
Back then, I didn't see such an option for the GWT compiler.

However, after doing some more searching, I found out that it does exist (!).
By adding a simple line to the application's .gwt.xml file, you can compile all
of your GWT application's output into a single JavaScript file. Meaning - no HTML files containing
inline JavaScript.

The line one needs to add is:
...
<add-linker name="sso" />
...

I added it to my Quickpik.gwt.xml file, and so when you run the GWT compiler, you get at the end
one JavaScript file containing the entire code of your app. Awesome!

You can access the entire code on Github and download it from there:
https://github.com/nirgit/Quickpik

You can simply click "Downloads" on the right-hand side of the screen in order to download
the repository as a ZIP file.

In case you're a GIT user, my suggestion is that you just clone the repository. It's really small.
Simply open your GIT shell and type:
git clone https://github.com/nirgit/Quickpik.git

The extension is already ready to be used in Chrome, and you can deploy it by opening Chrome's
Tools -> Extensions tab, and checking the "Developer mode" box.
Afterwards, you will be able to deploy the extension by selecting the "Load unpacked extension..."
option and selecting the war directory of Quickpik.

I am hoping this is somewhat useful.
I think GWT is an absolutely great tool, and one with which writing Chrome extensions can be a lot of fun.

Until next time!




Monday, July 9, 2012

Quickpik - GWT Chrome extension - Search photos



As a continuation of my previous post, this one isn't about discussing anything.
It is about doing - giving an example of something quite simple & cool which you could do yourself using GWT to build a Chrome extension.

Being a bit of a GWT enthusiast, I built an extension for Google Chrome called QuickPik - which allows users to search for images right from the toolbar.

Quickpik is using Google Images & Flickr (Yahoo) APIs.
You can run a simple search, and get some images as results. It's really simple.

You can just download the zip file from the link below, extract it and install the .crx file right into Google Chrome by dragging the file into it. If you're scared of some evil code running on your browser - just open the source code in the zip file, build the extension yourself, and then install it.

Keep in mind that the idea of this post is to show how easy it is to create a
cool extension for Chrome.
The code might not be extremely well documented, nor does it strictly follow Google's best practices like the MVP pattern or bundling CSS or Image resources and so on, but...

You can expect something compact and simple enough once you open it and take a look.
I don't think there should be a problem understanding much of the code.

Would love to get feedback if this helped anyone.

Both source code & Chrome extension below are available to download in one zip file.

Have fun !!!

The project on Github:  https://github.com/nirgit/Quickpik
A Zip file to download:   http://www18.zippyshare.com/v/16094412/file.html





Sunday, April 29, 2012

Spring ROO 1.2 review

This post, as the title mentions, is about my personal impressions of and thoughts about the
'Spring ROO' framework, version 1.2.1.

What is ROO?
ROO is a tool which helps automate the development process of web applications written in Java.
It supports different technology stacks, such as GWT & Spring MVC on the front end, and JPA with Hibernate or EclipseLink on the back end.
The framework is based entirely on the Spring framework, in order to accomplish the goal of creating a full-blown, end-to-end web application.
The idea is to be able to rapidly create and maintain a web application with a few 'shell' commands, and then let the users (the developers) fill in the gaps and customize business logic, application state and so on.

ROO monitors the changes the developer makes while writing code, and adapts the application's infrastructure to match and support the developer's actions.

I recommend you visit the ROO website in any case to get the best understanding of the tool.

Before I start telling you what I did and experienced I have to say that I enjoyed working with ROO.
The installation was fast & easy, the ROO shell was easy to use and the documentation was satisfying.

So! after some short-long intro, let me tell you -

What did I do?
1. Created a simple web-application based on a multi-module project - Server & Client modules.
2. Used GWT as the front-end, JPA over Hibernate with a DB in the back-end.
3. Wrote custom code and integrated it in the auto generated files of ROO.
4. Tried to change/add/remove a service (BL) & repository (DAO).
5. Inspected generated code.
6. Tried to remove project's dependency from ROO.

What I experienced
 1. The amount of code generated for the client module (GWT) was quite overwhelming. So many files were created to show/edit/create/delete a single entity named Product in the DB. While some of the generated code is absolutely necessary, a lot of code was created to support every possible option on the front-end.

Unfortunately, this is not configurable, and I could not find a way to tell ROO exactly what I wish to generate ('scaffold'), like - "only desktop support", "only mobile support", "only view/read" capabilities, etc... You can only pick which Entities on your model to scaffold (support on the front-end and back-end). All this leads to a very big codebase, which won't necessarily be used.
This leads to verbosity and a lot of redundancy, which makes your codebase larger and
more complex for no reason.

2. Front-end design - Follows Google's best practices, which is really great. You can learn
quite a bit about Google's understanding of how to design the front end of a web application.

3. Generated code looks fairly neat, however all generated code managed by ROO is "hidden" in AspectJ files. - Some people don't like this.

4. No easy removal* of ROO from the project in my opinion... - You could just stop ROO from monitoring your application and continue developing your project. But, all the generated AspectJ files remain. This could be a problem for the future, in case you want to change your classes' implementation. (* - Please see comments at the bottom).

5. Writing tests & integrating them with Services layer was easy.

6. Great end-to-end integration tests that run in the browser using Selenium (some kind of a web-driver script player).
It feels to me that even though ROO's GWT add-on tries very hard to give real added value to developers, it adds more complexity than simplicity, and that is, in my opinion, due to the fact that
a lot of code is being generated. That makes it hard, in the beginning, to dive into the code.

Not to mention how much tweaking needs to be done to disable features, for instance.
And in case you're letting ROO manage your project further, you will have to keep track of that
stuff all the time. So if you 'messed up' by scaffolding some entity you didn't want, you'll spend
your time removing the unwanted code. Where is the Undo feature of the shell?
ROO just doesn't feel as fine-grained as I would expect it to be.

Conclusion
I think that ROO is walking in the right direction, empowering developers to deal only with the logic of their web applications and not trouble themselves with infrastructure and design problems of standard web-applications.
I am not sure I would use it (especially not the front-end) in production, yet.
Despite all that, it still serves as a really nice tool for prototyping (or quickly setting up a back-end & DB configuration), and for learning how to implement a web app with best practices.

What can you profit from it:
1. It is an amazing tool to create a web-app POC very fast!
2. It is an awesome tool to learn new technologies and see how they work with each other.
3. Spring ROO applies best practices.
Explanation: ROO has a 'plug-ins' platform, which lets anyone generate code with a simple command in the shell.
Google for instance wrote such a plug-in for ROO to integrate with GWT. As a result you get the BEST, latest results for architectural/design/technology/practices on your web-app.
4. It's a great tool to get your backend Spring based infrastructure ready with a DB, in a few minutes, or see how it should be done properly.
5. Runtime performance - Probably the best runtime performance you could have achieved yourself.
All generated code/configuration is processed at compile time as regular Java files only!
Following the best practices of Spring & other vendors like Google, you can rest assured that there should be no performance issues caused by this tool. (Or with very little chance of it happening - even the best software giants have bugs sometimes.)
6. Did I mention the great integration tests running automatically on the browser?
I personally loved this feature!

I think it would be wise to keep an eye on ROO for the future, as it has already been out there for some time, and keeps on improving.

Visit 'Spring ROO': http://www.springsource.org/spring-roo