
Sunday, June 16, 2013

Lottery for winning a copy of the "Android NDK Cookbook" is now open

Following last week's post about winning a free copy of the eBook "Android NDK Cookbook",
the lottery starts today!

Three lucky people will each receive a free copy of the "Android NDK Cookbook", sponsored by "Packt Publishing".


Description of the book

 Android Native Development Kit Cookbook will help you understand the development, building, and debugging of your native Android applications. You will discover and learn JNI programming and essential NDK APIs such as OpenGL ES, and the native application API. You will then explore the process of porting existing libraries and software to the NDK. By the end of this book you will be able to build your own NDK apps.

Check it out on Packt Publishing: http://www.packtpub.com/android-native-development-kit-cookbook/book


How to participate?

Simply send an email by clicking here or by sending it to 
with the subject line "Android NDK Cookbook lottery".




Deadline

The contest will close on 30th June 2013. Winners will be contacted by email, so be sure to use 
your real email address! 



Good luck !


Saturday, June 8, 2013

Win a free eBook copy of "Android NDK Cookbook"

Are you an Android developer or interested in Android development?
This might interest you!

On the 16th of June 2013 I will hold a lottery on the blog for a free eBook copy of "Android NDK Cookbook".

The lottery will run for 2 weeks (until June 30th), during which you will be able to enter to win a free copy of the book.

Description of the book:

"Android Native Development Kit Cookbook" will help you understand the development, building, and debugging of your native Android applications. You will discover and learn JNI programming and essential NDK APIs such as OpenGL ES, and the native application API. You will then explore the process of porting existing libraries and software to NDK. By the end of this book you will be able to build your own apps in NDK apps. 




Further details of the book can be found here: http://www.packtpub.com/android-native-development-kit-cookbook/book


Stay tuned for the 16th of June!


* The copies of the book are sponsored by "Packt Publishing".

Sunday, May 26, 2013

Javascript Closure Functions Memory Model - How does it work?

Hey everyone,
For those of you who are writing code in Javascript or wish to do so, it is essential to understand
the concept of closures, and how they work.

I must say that when I started looking into it, it was somewhat difficult to find proper material
on the web to get a good grasp of how things work. So the idea of this post is to try to shed some
light on the matter. I don't guarantee that this will make you understand everything, but I do hope
it will help in some way.

*Note: I could of course have some mistakes in this post, and would be very happy if you send me
any corrections or comments, as this is supposed to help people and show the power of Javascript closures.
Everything that's written in this post is according to my understanding of how things work,
so regard this writing with care. I tried to do the best I could when I wrote this post.



The post contains 2 parts:
1. What is a Closure?
2. What does the memory model look like?


1. What is a Closure?

"... is a function or reference to a function together with a referencing environment—a table storing a reference to each of the non-local variables (also called free variables) of that function. A closure— unlike a plain function pointer—allows a function to access those non-local variables even when invoked outside of its immediate lexical scope." - From Wikipedia.

A simple example of such a Closure is a counter.
Consider the following script for example:

function createCounter() {
       var counter = 0 ;
       return function() {
              counter++ ;
              return counter ;
       }
}
 
var counter = createCounter() ;
counter() ; // returns 1
counter() ; // returns 2
var counter2 = createCounter() ;
counter2() ; // returns 1
counter() ; // returns 3


How does this work?
Executing the createCounter function creates a new Execution Scope, which contains a reference to a variable named counter, and returns an anonymous function.

So how does that function know about counter when it's executed?
In Javascript every function has a kind of property called a "Scope Chain", which defines the scopes the function is bound to. When we define a function in Javascript, the "definition scope" is one of the scopes stored in the function's "Scope Chain". So when we define and return the anonymous function in createCounter, that anonymous function is bound to the scope in which it was defined.
Thus, when the function is executed by calling "counter()", a new "execution scope" is created and added to the function's scope chain. So when the function needs to look up counter, it walks up its "scope chain" and eventually finds it in the "definition scope" of the function.
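
To make this concrete, here is a small sketch (the names are made up for illustration) of a look-up that walks through several scopes in the chain - the returned function's own scope, the definition scope of createGreeter, and finally the Global Object:

var greeting = "Hello" ;                 // lives on the Global Object
function createGreeter() {
       var name = "world" ;              // lives in createGreeter's scope
       return function() {
              // "name" is found in the definition scope (createGreeter),
              // "greeting" is found further up the chain, on the Global Object.
              return greeting + ", " + name ;
       }
}

var greet = createGreeter() ;
greet() ; // returns "Hello, world"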

Another thing to remember is that every time you run a function you run it inside a "context".

What is a context?
A context is the object the function is executed on. It is referenced by the this keyword.
It is defined when we write the following code in Javascript for example:

// context here is obj1 (this === obj1)
obj1.sayHello() ;

// context here is the global object, which in the case of a browser is *window (this === window)
sayHello() ;

* It might be equal to undefined (this === undefined) in case we're running in 'strict mode'.

You can check more about it at MDN - Function Context.
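
To make the snippet above complete, here is a small, self-contained sketch (obj1 and sayHello are just illustrative names) showing how the context changes depending on how the function is called:

function sayHello() {
       // 'this' is the context the function was called on
       return "Hello from " + this.who ;
}

var obj1 = { who: "obj1", sayHello: sayHello } ;

obj1.sayHello() ; // this === obj1, returns "Hello from obj1"
sayHello() ;      // this === window (the global object), this.who is undefined,
                  // so it returns "Hello from undefined"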




2. What does the memory model look like?

I think it's important to understand what the memory model looks like, graphically, to get a better
notion of how things work. Even though you could manage without it, I find illustrations very
helpful for understanding how something works.


Simple example

When you define a regular function, like:
function foo(number) {
   [ function_body ]
}

We get something like the following in memory:


If we execute foo(3), we will get something like the following in memory:


So when foo(3) is executed, a new "Activation object" is created.
It receives the function's parameters and the arguments array, and is added to the scope chain of foo(3).
So when foo looks up the "number" parameter, it looks it up the
scope chain starting from the closest (last) scope that is relevant to the execution, which in this
case is of course the "Activation object". If it doesn't find number there, it continues to the next scope in the
chain, until it finds it.
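
As a small illustration of that look-up (the names here are made up), consider a version of foo that also uses a variable which is neither a parameter nor defined inside the function:

var factor = 10 ;                 // found on the Global Object
function foo(number) {
       // "number" is found on foo's Activation object.
       // "factor" is not there, so the look-up continues up the scope chain
       // and finds it on the Global Object.
       return number * factor ;
}

foo(3) ; // returns 30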


Closure example

Let's take the Closure example we referred to earlier - the counter function - and see how it
looks in memory.

Quick reminder, our counter function looks like this:

function createCounter() {
       var counter = 0 ;
       return function() {
              counter++ ;
              return counter ;
       }
}

This maps to a memory model which looks (somewhat) like this:

[Illustration: the counter Javascript closure memory model]
You might wonder why the Closure's scope chain also refers to the Global Object.
The reason is that when we create a closure, it copies the scope chain of the execution context, which in this case belongs to createCounter().
What makes the Closure a closure is an interesting side effect in Javascript: instead of the Activation Object being destroyed after createCounter finishes executing, it remains alive, because there is a reference to it from another function - the closure.

Now, let's create such a counter by executing the following:

var myCounter = createCounter();  [1]
myCounter() ;                     [2]

So on line [1] we create myCounter which is a reference to the closure on the above illustration.
When we execute line [2], we get 1 as an answer, and the second time we will get 2.
How it looks in memory is shown in the next illustration:




When we execute myCounter, which is our closure in this case, we increment the counter variable, but in order to do so the variable needs to be reached somehow.
This is where the "scope chain" comes into play. The first "environment" where the variable is looked up is at the top of the scope chain - in our case, the "Activation Object" of myCounter.
Since the variable is neither an inner variable nor a parameter, the look-up continues and moves on to the next scope in the chain, which is the "Activation Object" of createCounter! There it is found, read and assigned (as counter++ is actually counter = counter + 1).

Note that every time you call createCounter, a new Activation Object is created with a new counter variable initialized to 0. This is how you can create as many separate counters as you like, without them affecting each other.


I hope this helped you in some way to understand the memory model of functions and closures a little better. Any comments / suggestions and complaints are most welcome.
Enjoy "closuring".

Friday, March 8, 2013

GWT Chrome extension with JSONP Server communication

I thought this post would be a good idea for people who want to know how to make an 'end-to-end'
GWT Chrome extension, with a connection to a server of their own.

While this demonstrates specifically how to enable the communication from a Chrome extension, the
code is no different if you wish to deploy your client code elsewhere (not on Chrome).
Many web applications choose to deploy their Server code & Client code on the same application server, like JBoss or Tomcat.

As you know, in GWT there are several ways to communicate with a server.
One of them, for instance, is RPC, which gives you the ability to invoke methods on a Java interface
without having to deal with anything but Java code. This is cool, as it makes a very smooth transition from
the client-side code to the server-side code.

However, when we want to implement this in a Chrome extension things get a bit trickier.
The reason is that you'd have to separate your server module from your Chrome extension module
(as we can't deploy Server side classes to Chrome - it runs only JavaScript),
and would need a shared model between the two in order to communicate.
It's not hard to do at all, it just makes for a longer example.
(In Eclipse, if you create a new "Web Application Project" using Google's plugin, you will have the client & server code in the same project, with a "shared" directory which serves as a model bridge between the two.)

Another option is using JSONP communication with the server.
In our case this makes things a lot simpler.
We will expose a REST API on a server we'll implement, for getting images, and send a JSONP request from the QuickPik extension to retrieve those images. Obviously, it can be anything else you want it to be in your application.
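
Before diving into the code, here is a rough sketch of what a JSONP request does behind the scenes. This is not QuickPik code - GWT's JsonpRequestBuilder and jQuery do the equivalent for you - and the function and callback names are only illustrative:

// JSONP = load the response as a <script> tag, and let the server wrap the JSON
// in a callback function name that we supply.
function jsonp(url, callbackName, callback) {
       window[callbackName] = callback ;                  // expose the callback globally
       var script = document.createElement('script') ;
       script.src = url + '&callback=' + callbackName ;   // the server wraps the JSON in callbackName(...)
       document.body.appendChild(script) ;                // loading the script invokes our callback
}

jsonp('http://localhost:8080/qpserver/rest/quickpik/searchPhotos?searchExp=hello',
      'handlePhotos',
      function(result) { console.log(result.items) ; }) ;

This is also why JSONP works across origins at all - script tags may be loaded from any origin, unlike a plain XMLHttpRequest.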

TO THE CODE!!!

The Server

In order to create a REST service and expose it, we will use the Jersey library, which is an implementation for
building RESTful web services. It's very simple.

The following QuickPikService class is our REST service:



package quickpik.server.web;

import java.io.IOException;
import java.io.InputStream;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.Date;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.MediaType;

import org.codehaus.jettison.json.JSONException;
import org.codehaus.jettison.json.JSONObject;

import com.sun.jersey.api.json.JSONWithPadding;

@Path("/quickpik")
public class QuickPikService {

private final static int BUFFER_SIZE_IN_BYTES = 1024;
private final static String FLICKR_API = "http://api.flickr.com/services/feeds/photos_public.gne?format=json&tagmode=all&tags=" ;

@GET
@Path("searchPhotos")
@Produces({ "application/x-javascript", MediaType.APPLICATION_JSON})
public JSONWithPadding getPhotos(@QueryParam("searchExp") String searchExp,
@QueryParam("callback") String callback) {
// "log" the call
System.out.println("[" + new Date() + "] searching for: " + searchExp) ;
URL flickrUrl = getSearchURL(searchExp) ;
JSONObject data = tryToGetSearchResult(flickrUrl);
return new JSONWithPadding(data, callback);
}

private URL getSearchURL(String searchExp) {
String composedURL = FLICKR_API + searchExp;
try {
return new URL(composedURL);
} catch (MalformedURLException e) {
e.printStackTrace();
throw new RuntimeException("URL composition failed. Please check your URL: " + composedURL, e) ;
}
}

private JSONObject tryToGetSearchResult(URL flickrUrl) {
try {
return getSearchResult(flickrUrl) ;
} catch (IOException | JSONException e) {
e.printStackTrace();
throw new RuntimeException("Failed searching.", e) ;
}
}

private JSONObject getSearchResult(URL flickrUrl) throws IOException, JSONException {
InputStream is = flickrUrl.openStream() ;
String result = readDataFromStream(is);
// Flickr specific string prefix for the JSON feed.
if(result.indexOf("jsonFlickrFeed(") >= 0) {
result = result.substring("jsonFlickrFeed(".length(), result.length()-1) ;
return new JSONObject(result) ;
} else {
return new JSONObject("{}") ;
}
}

private String readDataFromStream(InputStream is) throws IOException {
StringBuilder data = new StringBuilder() ;
byte[] buffer = new byte[BUFFER_SIZE_IN_BYTES] ;
int bytesRead = is.read(buffer) ;
while(bytesRead != -1) {
data.append(new String(buffer, 0, bytesRead)) ;
bytesRead = is.read(buffer) ;
}
is.close() ;
return data.toString() ;
}
}



That's it. This is our REST service, and we expose our API by using the @Path annotations
on the class and on the public method getPhotos. There are also other annotations, such as the
@GET annotation and the @Produces annotation, which state that the method is invoked on
an HTTP GET and returns (@Produces) a JSON (with padding, i.e. JSONP) object.
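
Just to show what "JSON with padding" means on the wire (the callback name and the payload below are only illustrative), a call like:

http://localhost:8080/qpserver/rest/quickpik/searchPhotos?searchExp=hello&callback=handlePhotos

returns the JSON wrapped in the supplied callback name, so the browser can execute it as a script:

handlePhotos({ "items": [ ... ] })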

As you can see, in the class we are using a Flickr API to get our image data.

Now all that's left in order to make this service work, is to define the Jersey servlet in the web.xml file,
which is done this way:


<web-app>
...

    <servlet>
        <servlet-name>Jersey REST Service</servlet-name>
        <servlet-class>com.sun.jersey.spi.container.servlet.ServletContainer</servlet-class>
        <init-param>
            <param-name>com.sun.jersey.config.property.packages</param-name>
            <param-value>quickpik.server.web</param-value>
        </init-param>
    </servlet>

    <servlet-mapping>
        <servlet-name>Jersey REST Service</servlet-name>
        <url-pattern>/rest/*</url-pattern>
    </servlet-mapping>

...
</web-app>


After doing that, we're able to refer to our REST service by using a URL in the browser:
http://localhost:8080/<web-app-name>/rest/quickpik/searchPhotos?searchExp=<our-search-term>

In order to verify this indeed worked, and to get a better understanding of what the structure of our JSON result looks like before even writing any code for the GWT Chrome extension client,
I did a small test using jQuery. You can find it in the GitHub repository.
What I did in a nutshell, was to call the REST API:
$.getJSON('http://localhost:8080/qpserver/rest/quickpik/searchPhotos?searchExp=hello&callback=?', jsonCB);

So I executed an AJAX call to search for "hello", and passed a callback - jsonCB to process the result.
jsonCB looks like this:


function jsonCB(result) {
    console.log(result);
    if (result !== null && result) {
        if (result.items.length === 0) {
            $($("#serverResponses")[0]).append("No results.");
        } else {
            for (item in result.items) {
                var url = result.items[item].media.m;
                var imgItem = document.createElement('img');
                imgItem.src = url;
                $($("#serverResponses")[0]).append("Server response: ")
                                           .append(url).append("<br>");
                $($("#serverResponses")[0]).append(imgItem).append("<br>");
            }
        }
    }
}



Note the line that reads result.items[item].media.m - this is how we get the URL information of the photo items sent back to us from the server.
So now we know that inside "result", which is a JSON object, resides an array of objects, each containing
an object called media with a field called m.
You can discover this simply by debugging the values returned from the server in the browser.
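
For reference, a trimmed-down illustration of that shape (the values are made up, and the other fields of the Flickr feed are omitted) looks roughly like this:

{
  "title": "Recent uploads tagged hello",
  "items": [
    {
      "title": "a photo",
      "media": { "m": "http://farm9.staticflickr.com/1234/some_photo_m.jpg" }
    },
    ...
  ]
}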

So now, we can finally add some code to our GWT Client.


Client side - GWT Chrome extension

In the previous posts about QuickPik, you might have noticed that in order to add an additional ImageDataSource, all you need to do is implement the interface IImagesDataSource and add that ImageDataSource to the DataSource enum.
It's much easier than it sounds.
So we need to do the following:
1. Write an ImageDataSource that "talks" to our Server.
2. Edit the manifest.json file to allow communication to our server in order to comply with the
    "Content Security Policy" of Chrome's extensions.


IImagesDataSource implementation - ServerDS




public class ServerDS implements IImagesDataSource {

    private final static String QUICKPIK_SERVER_URL =
        "http://localhost:8080/qpserver/rest/quickpik/searchPhotos?searchExp=" ;

    ...

    @Override
    public void getImages(final String searchExpression, final FilterLevel filter,
                          final Callback<PhotosSearchResult, Void> callback) {
        String url = QUICKPIK_SERVER_URL + searchExpression ;
        JsonpRequestBuilder jsonp = new JsonpRequestBuilder();
        jsonp.requestObject(url, new AsyncCallback<ServerResult>() {
            public void onFailure(Throwable throwable) {
                // do some error handling..
            }

            public void onSuccess(ServerResult result) {
                handleLoadedImagesResult(searchExpression, filter, result, callback) ;
            }
        });
    }

    private void handleLoadedImagesResult(String searchExpression, FilterLevel filter, ServerResult result,
                                          Callback<PhotosSearchResult, Void> callback) {
        JsArray<ServerImageItem> imageItems = result.getItems();
        LinkedList<Photo> photos = collectImages(imageItems);
        callback.onSuccess(new PhotosSearchResult(searchExpression, filter, photos, 0, false)) ;
    }

    private LinkedList<Photo> collectImages(JsArray<ServerImageItem> imageItems) {
        LinkedList<Photo> photos = new LinkedList<Photo>();
        for (int i = 0; i < imageItems.length(); i++) {
            ServerImageItem imageItem = imageItems.get(i);
            Photo p = new Photo(i+"", imageItem.getImageURL(), imageItem.getImageURL()) ;
            photos.add(p) ;
        }
        return photos;
    }
}


I'll try to explain the code briefly:
1. At the top of the class you can see the URL of the server we will be using to get the images from.
2. The getImages method, which must be implemented, is passed a searchExpression, a filter and
   a callback. We will ignore the filter to keep things simpler.
   GWT offers a class called JsonpRequestBuilder which allows us to invoke an asynchronous
   call to the server, passing a callback to deal with the result of the invocation.
3. If the call was successful, we have to handle the result somehow. We do that by calling a handle method.
4. In the handleLoadedImagesResult method, we get an object called ServerResult.
    ServerResult is a custom object created especially for this operation. It is a wrapper around
    a JavaScriptObject that gives access to the result.items object array. (Remember it from the jQuery test?)

This is what ServerResult looks like:


public class ServerResult extends JavaScriptObject {

   protected ServerResult() {
   }

   public final native JsArray<ServerImageItem> getItems() /*-{
     return this.items ;
   }-*/;
}


The native getItems method uses GWT's convention for writing JavaScript code against the underlying JavaScriptObject that the Java object wraps. So when you invoke getItems, behind the scenes the
JavaScript code "return this.items" is executed, returning the JavaScript items array.

I also mapped the objects in this array to a Java object in the same way I did with ServerResult.
Following the same idea, ServerImageItem looks like this:


public class ServerImageItem extends JavaScriptObject {

    protected ServerImageItem() {
    }

    public final native String getImageURL() /*-{
        return this.media.m ;
    }-*/;
}


Here we use the getImageURL method to return the (JavaScript) object's media.m property.
(This is exactly the same structure that was referred to when testing with jQuery above).


So after this long explanation: all we do in the "handle" method is basically gather all of our
image URLs, create a Photo list as expected by the callback passed to us in the getImages method
invocation, and invoke the callback, passing it the expected PhotosSearchResult object containing
our Photos.

5. Now we must add our ServerDS to the DataSource enum, so it will be included as a data source
    when we run a search.
    We do that simply by adding the QUICKPIK_SERVER line below:

public enum DataSource {

    // add here your data sources
    FLICKR(false, new FlickrDS()),
    QUICKPIK_SERVER(true, new ServerDS())
    ;
    ....
}

Notice how I intentionally turned off the FLICKR data source by setting its isEnabled flag to false.

6. The last thing that needs to be done is to enable our Chrome extension to make calls to our server, by
    editing the manifest.json file and adding the following line to it:

{
   ....
   // A relaxed policy definition which allows script resources to be loaded from localhost:8080 over HTTP
  "content_security_policy": "script-src 'self' http://localhost:8080; object-src 'self'"
}


That's all folks! All you need to do now is build the Server (I did it with Gradle in this project), compile
your GWT client, deploy it on Chrome and watch it work.

You can access all the code on GitHub, and download it.

Code on GitHub
All of the project's code can be found on GitHub at: https://github.com/nirgit/Quickpik-with-server



Wednesday, January 30, 2013

RPM example - Application packaging

This post is about something quite different from the others.
It concerns packaging your software as an RPM file, making it ready for delivery.
If you have never used RPM before and/or you're not an experienced Linux user, this post is intended for you.

I'm far from being an expert in this area, and I would guess there is probably a better way of achieving the goal I want to show here. Please share your thoughts and experience if you have any.

I assume you have already heard of RPM and understand its benefits.
This post is not meant to convince you to use RPM, only to show how to create an RPM file which you could use.

What's in this example?
I created a "Hello World" Java application and, using Ant, I compile it and Jar it.
This application could be anything you want - this is just a "place-holder" for you, to replace with
your real stuff.

So, after having an RPM file, we'll be able to install it on the system and remove it afterwards.
The path into which the program is installed will contain the "Hello World" application Jar.

RPM Structure quick overview

As you probably know by now, RPM files are created using a ".spec" file.
This file tells the rpmbuild tool, which we will use, how to create the RPM package.
The .spec file contains several sections:

  1. Headers - The headers of the spec file contain things like a Summary of what the package is,
    the Name of the package, Version & Release information, License & Packager, and an
    attribute named BuildRoot. BuildRoot is important - it specifies the location
    where your package will be temporarily installed.
    What I mean by temporarily, we'll get to later.
  2. %description step - This is where you can type just about anything you want describing
    your awesome application.
  3. %prep step - In this step you prepare your source files to be built (using Ant in our example).
    "What's to prepare?" you ask? Well, in order to build an RPM package, you must provide
    RPM with a .tar file or a zipped file of some kind in its SOURCES directory (I'll explain the
    structure of RPM further down), so what you need to do in this step is to extract your
    archived sources.
  4. %build step - This is where the actual build of your application happens. In our case, we'll
    execute ant on our build.xml file, that compiles and Jars the application - real simple.
  5. %install step - This is the step that happens after the %build step, and it determines what
    your installed package will look like. We will copy the necessary binaries (the Jar in our case) to a relevant directory.
  6. %files section - This section determines which files from the %install step to pack, and with what access rights.
  7. %clean section - This section runs after packaging was done, and cleans up all the leftovers from your build and your install steps. It's important to delete all the old files, especially if you build several RPM packages, you don't want to have files of another RPM build process in your way.

Okay. I know this still doesn't tell you how to do stuff, but this introduction is quite important in my opinion.
So take a deep breath one last time, and we'll go over the rpmbuild directory structure and understand the steps we need to take in order to accomplish our mission.

"rpmbuild" Directory structure

An important part of the way RPM works, is its directory structure where work is being done.
You should be able to locate your rpmbuild directory on your machine, which usually should
show under your home directory.

The rpmbuild directory contains the following sub-directories:
  1. BUILD - This is the directory where your sources will be built, and where the artifacts of your build will be located. The %build step reads from this directory and writes to it.
  2. BUILDROOT - This is the directory where you will "temporarily" install your binaries to.
    You can think of it as the "INSTALL" directory. After the build step, the BUILDROOT
    serves as a directory to contain the files in a given directory structure. Those files in this
    specified directory structure will be installed afterwards using RPM.
    To clarify: If you place under BUILDROOT a directory called my-app-123 then on the
    real install of the RPM package, your package will be installed under /my-app-123.
    This step reads from the BUILD directory and writes to the BUILDROOT directory.
  3. RPMS - This is the directory that will contain your RPM package in the end.
  4. SOURCES - This is the directory where you should place your archived sources.
  5. SPECS - This is the directory where you should place your .spec file which will create
    the RPM package.
  6. SRPMS - This is a directory where source RPMs are created and stored. 

Each of those directories can be referenced in the .spec file using RPM variables:
  %_specdir      → ~/rpmbuild/SPECS
  %_sourcedir    → ~/rpmbuild/SOURCES
  %_builddir     → ~/rpmbuild/BUILD
  %_buildrootdir → ~/rpmbuild/BUILDROOT
  %_rpmdir       → ~/rpmbuild/RPMS
  %_srcrpmdir    → ~/rpmbuild/SRPMS

Now that we have finished our overview, let's go over our 8 steps to create a packaged application with RPM.
  1. Create an application (write source code) - in our case, a simple "Hello World" application.
  2. Create the build file for the application - Ant's build.xml file in our case.
  3. Create an archive from our application's sources.
  4. Place the archive (tar) at ~/rpmbuild/SOURCES directory.
  5. Copy the .spec file into the ~/rpmbuild/SPECS directory.
  6. Create the RPM package using: rpmbuild -ba <spec_file>
  7. Install the package.
  8. Uninstall the package.
Now, let's go into a bit more detail:
  1. You can download the pre-made application entirely from GitHub: rpm-example-project.zip
    in order to skip steps 1,2 and 3.
    (You can also access the repository - http://github.com/nirgit/RPM-Project-Example).
  2. After downloading the zipped application, and extracting it somewhere on your machine,
    you can take a closer look into its hierarchy. You will find under the Application directory,
    a directory called src which contains the sources of our application, and you will find
    the build.xml file which builds the application.
    You will also find 2 other files - the application.tar containing the application sources,
    and the project.spec file.
  3. You need now to copy the .spec file into the ~/rpmbuild/SPECS directory, and the
    application.tar file into the ~/rpmbuild/SOURCES directory.
  4. In order to create the package now, you're required to run the rpmbuild tool, by
    executing: > rpmbuild -ba ~/rpmbuild/SPECS/project.spec
  5. Since I use Ubuntu, I also use alien to install the package. In case you're running on
    a Linux flavor such as CentOS, I think you can simply run the regular RPM install.
    With alien: > sudo alien -i ~/rpmbuild/RPMS/i386/Example-RPM-Project-1.0-1.i386.rpm
    Without: > rpm -i ~/rpmbuild/RPMS/i386/Example-RPM-Project-1.0-1.i386.rpm
  6. After the install you should be able to find the package installed under the root:
    /Example-RPM-Project-1.0-1
    Under it, you can find the app.jar.
  7. You can uninstall the package using RPM's command: rpm -e <package_name> 
    or (in my case using Ubuntu): sudo dpkg --remove example-rpm-project

The SPEC file
This is the SPEC file associated with the RPM package.
I hope it will serve you as a good starting point for your own SPEC file.


################################################################
#
# This is an example of a simple RPM spec file.
#
################################################################

Summary: An RPM Spec example
Name: Example-RPM-Project
Version: 1.0
Release: 1
License: Apache 2.0
Group: Applications/Sample
URL: http://www.mycompany.com
Packager: Nir Moav <getnirm@gmail.com>
BuildRoot: %{_buildrootdir}/%{name}-%{version}-%{release}

%description
This is a sample SPEC file for the RPM project
demonstrating how to build, package, install(deploy)


%prep
# extract the tar file containing all the sources, to the build directory.
tar -xvf %{_sourcedir}/*.tar -C %{_builddir}


%build
echo "Building the project..."
cd Application
# running ant to build the java project (could be make/maven/gradle or anything else).
ant


%install
# This is the hierarchy which is going to be inside the package (RPM/Deb) eventually.
echo "Install phase..."
mkdir -p %{buildroot}/%{name}-%{version}-%{release}
cp -R %{_builddir}/Application/output/jars/* %{buildroot}/%{name}-%{version}-%{release}


%post
#This runs post the install - maybe you want to execute the application already then
echo "Post install.."


%postun
#this runs after the uninstall
echo "Post Uninstall..."


%files
# tells which files to contain in the package and with what access rights
# the triplet consists of (<file mode>, <user>, <group>). Make the necessary changes.
%defattr(-,nir,nir)
/*


%clean
# Clean up! Must run this! After the build and install steps execute, this will make
# sure that the directories are left clean, so in case you're building another
# package, you don't pack the previous build's artifacts.
rm -rf %{_builddir}/*
rm -rf %{_buildrootdir}/*



That's all folks!
I hope you managed to read through this long post, and found it useful.

Enjoy,
Nir