musings between the lines

there's more to life than code

authentication module


tl;dr: Run it yourself in 10 seconds flat via GitHub and Maven: Authentication Module

The introduction post sort of served as a primer on why I wanted to create this module. This post talks about the technical side of the module and what it should (and shouldn’t) be used for. Or something like that. We’ll see where it goes once I’m done writing it.

Authentication Module

Every site that wants to deliver customized content needs a way to recognize the user. The more customized and personal, the more likely you’ll want actual user identification rather than just plain browser recognition.

Since I figured most sites I want to build will have some element of user identification, I may as well write a module that I can drop in and get going without having to worry too much about recreating it from scratch each time. Thus here we are: a Java servlet based module that will allow the developer to drop in a few servlets, modify a configuration file or two, and have a bare bones site that handles user logins from a variety of 3rd party systems.

Java

Yeah, it’s what I like, and what I still think is pretty relevant when it comes to server side content. I debated whether I wanted to try something new, but for the backend, I’m comfortable with Java, and it serves well in terms of support and being able to find and use libraries. I’ll save my experimenting for front end work.

Concept

The idea is to have a system that will allow the developer to add 3rd party login support by trivially calling some servlets and inserting the provided JavaScript. I want to get away from having to rely on some deep configuration of the host platform for user and password storage. It just makes things too complicated since, most of the time, all you really want is to know who the user is and whether they have been validated, so that you can get/set their preferences and create a customized experience for them.

I also want to have this divorced from any particular system or infrastructure so that it can just run. No need for anything WebSphere or Tomcat or GlassFish specific, or any deep configuration of LDAP or JAAS or some security module. Just alter a property file and you’re good to go. Now, is this the right way to do it? Well, that’s debatable, especially since these servlet systems do offer a lot of robust options, but I also just want to give creating this project a shot.

In the end, it’s meant to serve needs that I’m identifying during site creation, so that’s my driving force. Along the way, I hope that it may be of help to others who want a bootstrap base to explore and jump from. We’ll see how well that gets accomplished.

Flow

The basic idea is to:

  • Have a screen with a bunch of login buttons.
  • Hook those buttons onto a servlet or JavaScript as needed.
  • Upon user click, initiate the needed redirection and prompting defined by the flow requirements of the 3rd party systems.
  • Upon proper authorization, grab the basic data returned from the 3rd party and present that information to the backend system to allow it to use it for a customized user experience.

Implementation

A collection of 4 servlets: 2 of them core, and the other 2 slightly more optional (read: yet to be fully implemented).

  • LoginServlet - Triggers sending the browser to the official authentication page of the 3rd party system.
  • CallbackServlet - The location the browser is sent to after the user has authenticated with the 3rd party. This is usually an address registered with the 3rd party system or passed to it as part of the initial login (automatic).
  • LogoutServlet - A way to remove the current user information from the browser session.
  • DisconnectServlet - A way to decouple the 3rd party system from the user, i.e. remove the authorization to leverage that 3rd party platform.

Setting aside the last 2 for now (since their implementations are of lesser interest), the system really only requires 2 solid servlets to function.

Login

The com.subdigit.auth.servlet.LoginServlet instantiates the com.subdigit.auth.AuthenticationHelper class, which is what actually handles the gritty details. I did this with the hope that I can easily decouple the system from a specific servlet framework instance. For now the AuthenticationHelper requires the servlet’s request and response to be passed in so that it has access to the necessary information, but I can see this evolving so that you can extract the information from any source into the AuthenticationHelper and have it behave accordingly, with no requirement to be in a servlet environment at all.
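To make that concrete, the servlet side really is just a thin shim. Here’s a minimal sketch of that delegation; the helper’s constructor and exact signatures are my own illustrative guesses, so check the actual source for the real API:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.subdigit.auth.AuthenticationHelper;

public class LoginServlet extends HttpServlet
{
  @Override
  protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException
  {
    // Hand the request/response pair to the helper (sketched constructor);
    // it works out which 3rd party service was requested and redirects
    // the browser to that service's authentication page.
    AuthenticationHelper authenticationHelper = new AuthenticationHelper(request, response);
    authenticationHelper.connect();
  }
}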

The AuthenticationHelper is issued an authenticationHelper.connect() call to start the process. It figures out which 3rd party system you’re calling, finds the proper class that can handle that system’s requirements (which needs to implement the com.subdigit.auth.AuthenticationService interface), dynamically loads it, and calls that service’s authenticationService.connect() method. That method creates the URL to the 3rd party system with all the required parameters (like the application id and key) and redirects the browser to it to prompt the user to authenticate.
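Conceptually, the per-service contract boils down to something like this (a simplified sketch; the real interface in the repo likely carries more context than shown here):

package com.subdigit.auth;

public interface AuthenticationService
{
  // Build the 3rd party authorization URL (application id, key, callback
  // address, etc.) and redirect the browser to it.
  void connect();

  // The callback leg: exchange the returned token for the user's details
  // and package them into an AuthenticationResults for the application.
  AuthenticationResults validate();
}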

Callback

Once the user authenticates, the 3rd party system is told to call back to the com.subdigit.auth.servlet.CallbackServlet, which instantiates another AuthenticationHelper that chains the same AuthenticationService to call the authenticationService.validate() method. At this point, the system usually has an abstract token for the user, which then needs to be looked up with the 3rd party system to get the relevant details (like name, email, etc). Each service varies in how this is done, which is why we need a per-service class (see the com.subdigit.auth.service package) that handles these issues for each 3rd party you want to connect to the Authentication Module.

Once the user’s information is retrieved, the service then packages all the data it has into a com.subdigit.auth.AuthenticationResults object, which is floated back up to the CallbackServlet. At the CallbackServlet level, the developer can probe the AuthenticationResults object’s various methods to access the stored information about the user. This will then allow the developer to correlate the data in the AuthenticationResults object to an existing user in the local datastore or to create a new user.

Installation

Download the source from GitHub:

git clone https://github.com/subdigit/authentication.git

That should get you an authentication/ directory. You will need to copy:

src/main/resources/authenticationserviceconfiguration.properties.sample -> src/main/resources/authenticationserviceconfiguration.properties

And fill in the appropriate .appid and .appsecret values for the services you want to use. It’s a hassle, but you’ll need to find the developer pages for each service and do the whole registration of your app there. Some URL hints for that are in the project’s README.md file. Once the values are inserted, make sure .enabled is set to true for the services you will use.
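For reference, the per-service entries follow a prefix pattern along these lines (a hypothetical example; mirror the real service names and key suffixes from the .sample file):

# hypothetical entries for one service; copy the real keys from the .sample file
facebook.enabled=true
facebook.appid=YOUR_APP_ID
facebook.appsecret=YOUR_APP_SECRET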

Test Instance

To run a test instance, be sure you have Maven installed and run:

mvn tomcat7:run

That should fire up a bare bones system on localhost:8080. The services you’ve enabled should show up there, and you should be able to click and log into the test page with them. Should :).

Integration

The servlets provided are really bare bones samples. Take the flow in each of the doGet/doPost methods of the servlets and implement your own desired customizations in them. You can probably copy the LoginServlet as is and write your own LogoutServlet for however you want to get the user logged out. The only one where you’ll need to modify the code is the CallbackServlet. Once the AuthenticationResults object is returned from authenticationHelper.validate(), this is where you need to probe the results to figure out who the user is and how they relate to your existing system (new user, existing user, etc). At this point you will need to decide where to redirect the user, whether a new account needs to be created, and so on.

The authenticationResults.getServiceUserID() call returns the primary user identifier from the 3rd party system, and authenticationResults.getService() tells you which 3rd party service is in question. That should be enough information to find that same user in your back end and load their information. If they are new, you can get individual values from authenticationResults.getVariable(<String>) or get all the parsed data via the HashMap<String,Object> returned by authenticationResults.getDataStore(). And if you still need more information, you can get the actual object the service returned from Object authenticationResults.getReturnData(). You will need to look into the code for each service to find out what type of object is being stored.
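Put together, the probing inside your customized CallbackServlet could look roughly like this. UserAccount and its lookup/creation methods are hypothetical stand-ins for your own datastore logic, and I’m assuming String return types here purely for illustration:

// After the validate() call in your CallbackServlet:
AuthenticationResults results = authenticationHelper.validate();

// These two values identify the user uniquely per 3rd party system.
String serviceUserID = results.getServiceUserID();
String service = results.getService();

// Hypothetical application-side lookup against your local datastore.
UserAccount account = UserAccount.findByServiceAndID(service, serviceUserID);
if (account == null)
{
  // New user: pull whatever profile data the service returned.
  // Key names vary per service, so check the per-service class.
  String name = (String) results.getVariable("name");
  account = UserAccount.createFrom(service, serviceUserID, name);
}
// ...store the account in the session and redirect as your site requires.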

Questions

If you have any questions about how to get this working, feel free to leave them here or to just bug me on Google+. Twitter is ok too, but the 140 character limit will just end up being frustrating.

Oh, and be more than free to do what you want with the code, just that if you make any changes, share them!

yeah, it’s still got a while to go, but this can be a start.

authentication introduction


tl;dr: Logins suck. I’m making it modular. Plug it in here: Authentication Module

Authentication

I come from a corporate world of internal application development, where the concept of a single, unified user identification is handled through the official company email and a corresponding backend password validation system. So when I create an application, there’s no question or issue about which identity provisioning mechanism to use. The corporate email and password ruled everything.

But the real world is a different beast.


Reality

Out here, the use of a user email implies the need to register it, and therefore to create a unique password to associate with it, and I, as the application developer, now have to store that password. It would be great if everyone picked a unique password for each site so that if there is a breach and the password becomes known, all that has to be done is to nuke that password and no one can get into the site. However, that’s not how people work. Passwords get reused… a lot. One password to rule them all is all too common. Combine that with a known email, and that of course spells trouble.

But there are certainly ways to actually store passwords securely. You can hash and salt them to obfuscate the actual password and only care about what the password represents. There are also numerous services out there that you can leverage to validate a user. You can do things like log in via Facebook, Google+, Twitter, and even Mozilla’s Persona initiative.
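For the curious, “hash and salt” boils down to something like this in stock Java; a minimal sketch using the built-in PBKDF2 support (the iteration count and key size are just illustrative choices):

import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

public final class PasswordHasher
{
  // A random per-user salt means identical passwords hash differently.
  public static byte[] newSalt()
  {
    byte[] salt = new byte[16];
    new SecureRandom().nextBytes(salt);
    return salt;
  }

  // Store the salt (and iteration count) alongside the resulting hash;
  // to verify a login, re-run this with the stored salt and compare.
  public static byte[] hash(char[] password, byte[] salt) throws Exception
  {
    PBEKeySpec spec = new PBEKeySpec(password, salt, 10000, 256);
    return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA1")
        .generateSecret(spec).getEncoded();
  }
}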

All of these put the validation, verification, and identity check of the user on other systems, so that you can leverage their user validation infrastructure and just confirm who is accessing your site without becoming the actual provisioner of the security credentials. As a developer, less is more in this case. Doing this shifts the onus of password storage security entirely back to the “other guys”, and it’s one less thing to worry about. All I need to worry about is who the user is, and keep track of that information locally.

Now, does this sort of offloading entail problems? Sure. Your potential users need to be a member of one of the services that you’re coding against (though something like Persona is trying to eliminate that as much as possible). It’s also hard to correlate one person’s Facebook account with their Google+ account unless you get something definitive to connect them together (like the underlying email address or an explicit login to associate one with the other). People may do this intentionally to create a new account, though that’s something already present with email based logins anyway, so not much of a new issue. It should actually be less of an issue, since you can hope it isn’t as easy to create duplicate accounts on these other systems (but that would be a naive assumption).

What’s actually worse, though, is the unintentional side of duplication: poor memory. Many a time I’ve forgotten whether I signed into a site via Facebook, Twitter, Google+ or something else. So the developer will need to take that into account as well.

But despite that, I think removing the onus of password storage and security is a good thing. You know, just in case, not that anyone would want to breach a site I create :).

In the end, it’s more about simplicity. I just don’t want to hassle with it. The less I deal with sensitive material, the less likely it can be breached.

New Project

Hence this project: Authentication Module

The goal is to create an easily usable Java servlet based module that can be leveraged as the login mechanism by anyone developing a site. Well, for the time being, that anyone is me, and I just didn’t want to rewrite this part for each site I wanted to create. I’ll have a few more inner details about the project in the future, but for now, you should be able to run it via a git clone and a maven call.

The purpose of this module is to provide the developer with all the necessary code and routines to allow the end user to log in via the provided 3rd party services, and to return the identifying information for the user so that it can be stored in the application as the local user. No more having to deal with passwords; just a return of the critical information about the user that the system needs to allow a customized experience.

What the developer does with that information is beyond the scope of this module (I’m planning on creating a new module for that), but this will be a good jumping off point to just get going quickly.

I hope it’ll be useful to people. You can check it out on GitHub and feel free to suggest any requests, fork or pull or spoon or whatever you like with it.

security and “doing things right” are always such tough issues. personally, it’s a thorn in my side, hence the need to modularize and just make it easy and reusable

maven setup


More: My Maven based project workflow: Maven Workflow

Maven

I only recently stumbled upon Maven after seeing how Ruby and node.js have their respective package management systems. Well, actually, I’ve been stumbling upon pom.xml files for a while now but it never really clicked with me what they were for. After a bit of exploring during a new project setup, I figured it’s time to include a bit more package management into my Java projects so that I can be a little more versed with how things are done in modern Java development.

So, enter the Maven setup guide.

I’m writing this as a companion piece to the Java Development post since the concept is going to be related. This guide will assume some setup from that post for the basic Eclipse bootstrap, and the rest will diverge from there to help you get up and running with Maven and Eclipse and a simple project setup. Or something like that. We’ll see how it fares by the end of the post.

Setup

So here’s the plan:

  • Get Maven installed on the OS and in Eclipse
  • Create a new Java project with Maven support
  • Set up the various configuration files
  • Integrate the project into Eclipse
  • Make sure the project can be run independent of Eclipse

Preparation

You’ll want to get Maven installed on your OS, since part of the point of Maven is to be able to compile and run things independent of a platform (like Eclipse). For something like Ubuntu, you can easily do:

sudo apt-get install maven

and that should do the trick. For other OSes, you’ll have to do your due diligence to get it installed :). For Eclipse, you’ll need these repositories. Though technically, I think you just need the second one:

“Maven” : http://download.eclipse.org/technology/m2e/releases/
“Maven Integration for WTP” : http://download.eclipse.org/m2e-wtp/releases/

With that, your system should be ready for Maven use at the OS and Eclipse level.

Project Creation

This is where the bizarre Maven commands come into play. For my case, I develop web applications aimed to be run on servlets, so this is the Maven command to create a project as such:

mvn archetype:generate -DarchetypeArtifactId=maven-archetype-webapp

Just run that from your project parent directory and it will eventually create a project with the specified artifactId you will provide during the prompted walk through. In a nutshell:

  • groupId: Your Java package. For me, I use something like com.subdigit
  • artifactId: The application package. So I may do something like testapp if my project package is com.subdigit.testapp
  • version: I usually downgrade my version to something like ‘0.0.1-SNAPSHOT’ just for starters. Keep the -SNAPSHOT bit, as it actually has semantic meaning.
  • package: should be fine to leave as is, as it will just mirror the groupId.

And that’s it, you should now have a “testapp” directory populated with an outdated servlet web.xml. Congrats. Which of course means…
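The generated layout should look more or less like this:

testapp/
  pom.xml
  src/
    main/
      resources/
      webapp/
        index.jsp
        WEB-INF/
          web.xml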

Fix Defaults: web.xml

Why the default archetype for a webapp is not up to date with modern servlet standards, I don’t know. But it’s an easy enough fix.

You’ll want to update the src/main/webapp/WEB-INF/web.xml from the dark ages to at least the current standard of the 3.0 framework. So, just replace everything in there with this:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
  version="3.0">
  <display-name>User Module</display-name>
  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.jsp</welcome-file>
  </welcome-file-list>
</web-app>

That should give you the basics to start with. And if you do it now, Eclipse will be much happier giving you the proper settings during the import.

Fix Defaults: pom.xml

For the most part, the pom file is ok. But it could use some version updating. The default junit version that’s hardcoded in is 3.8.1 so you might want to update it to something like 4.11.
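In the pom, that’s a one-line version bump to the existing dependency entry (the test scope is my assumption of the usual junit setup):

  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.11</version>
    <scope>test</scope>
  </dependency>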

You’ll also want to force Maven to use a modern JRE as the default is something rather ancient. To do so, you’ll need to explicitly specify the JRE version you want Maven to use in the pom file’s <build>/<plugins> section:

  <plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>3.0</version>
    <configuration>
      <source>1.7</source>
      <target>1.7</target>
      <encoding>UTF-8</encoding>
    </configuration>
  </plugin>

The rest you get to have fun modifying to your needs, adding in all sorts of dependencies and whatever else you require. I won’t cover that here, but you can see the pom.xml I use in this project.

Eclipse Import

Once all the above is set, you can import it into Eclipse. Some sites will tell you to use “mvn eclipse:eclipse -Dwtpversion=2.0” from the commandline to prepare the project, but the instructions that worked for me didn’t like it, so I don’t use it for my projects. All you should need to do is:

File -> Import -> Existing Maven Projects

And just follow the prompts. If you already updated your web.xml per the above, the Dynamic Web Module version should already be at 3.0. Which is what you want.

Fix Defaults: Eclipse

You’ll most likely have some of the directories you need, but not all of them. Odd, isn’t it? Just go ahead and make sure you have at least these 4 directories:

  • /src/main/java: Java files go here
  • /src/main/resources: Property files go here
  • /src/main/webapp: JSPs go here
  • /src/test/java: Unit tests go here

If not, you can just right click on the project and do a “New -> Folder” and create the missing ones. Again, why they’re not already all there, I have no clue.

Update Settings

One thing that didn’t always catch for me was my Maven dependencies. Basically, with Maven, you’ll no longer be directly hosting the various dependency jars in “WEB-INF/lib”; instead they all go into a central repo on your machine that Eclipse needs to reference in its classpath. So you need to make sure your “Maven Dependencies” are linked properly, or else you’ll end up with a bunch of “ClassNotFound” exceptions.

Open up your project properties, and find “Deployment Assembly”. In that, there should be:

"Maven Dependencies" -> WEB-INF/lib

If it’s not there, add it. If it’s there, golden. Just to make sure, you should also have your main java and resources directories pointing to “WEB-INF/classes” and the webapp directory pointing to “/”.

Eclipse Independence

At this point, you should be able to create your servlet, configure the web.xml or use the @WebServlet annotation and have it get registered properly (assuming you went through and configured a servlet container like Tomcat). You’re all golden.
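As a quick sanity check of the whole chain, an annotation-registered hello servlet is enough; a minimal sketch where the class name and URL pattern are arbitrary:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Registers itself at /hello with no web.xml entry needed (servlet 3.0+).
@WebServlet("/hello")
public class HelloServlet extends HttpServlet
{
  @Override
  protected void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException
  {
    response.setContentType("text/plain");
    response.getWriter().println("Hello from Maven and Tomcat!");
  }
}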

Now we need to take advantage of the whole package and independence Maven is supposed to give you.

Personally, I never want to have to muddy my development environment just to test out a cool project I’ve seen somewhere, or to run the samples of a project without the hassles of setup. I love seeing all these node.js projects be able to run everything from a lightweight commandline instance so that you don’t have to do any setup. Maven lets you do the same.

For me, I’m currently using Tomcat, so I would love to just have Maven be able to run my project from the commandline so that people can easily test it out without hassle. To make that happen, you need to make sure you have a dependency in your pom.xml that points to the servlet API so that Maven can compile against it, and then you also need to have it call a server to run the compiled code on a localhost port.

So, for the servlet API dependency, make sure you have this as part of your pom’s <dependencies> section:

  <dependency>
    <groupId>javax.servlet</groupId>
    <artifactId>javax.servlet-api</artifactId>
    <version>3.0.1</version>
    <scope>provided</scope>
  </dependency>

And in your <build> section’s <plugins> area, you’ll need the Tomcat plugin installed:

  <plugin>
    <groupId>org.apache.tomcat.maven</groupId>
    <artifactId>tomcat7-maven-plugin</artifactId>
    <version>2.0</version>
    <configuration>
      <url>http://localhost:8080/manager</url>
    </configuration>
  </plugin>

Once that’s in place, you can then call:

mvn tomcat7:run

From the commandline and voila, you should now see your app running on http://localhost:8080. Congratulations. You’ve now achieved IDE independence.

So in theory, if you put your code up on GitHub and tell people to clone it locally, all they need to do to run it is type the above. No setup, no IDE needed, no hassles. Quick and simple. You can try it with my project and see if it does indeed work.

Notes

There’s a lot I’m leaving out about Maven and pom.xml setup, but I’m just not versed enough to write about it, nor, if I did, would this post remain a readable length. It’s already pretty long, and the nuances of the pom.xml are just way beyond this post. Best bet: go search for pom.xml files on GitHub and use them as examples. I’m sure there will be a few referential ones around :).

Next

I think I have most of my setup complete, so I want to start concentrating on creating some projects. You can get a peek at what I’m up to on my GitHub account if you’re curious. It’s going to be a series of modules that offer some good reuse and bootstrapping for starting a user login capable website… or so I think.

setup always takes so long to do. I hope this speeds it up for some people. At least it does for me. Also, maven image sourced from ideyatech

google+ shift


tl;dr: Fix communities by allowing simultaneous share to public and filtering circles by communities to create a semantically relevant data pipe for Google’s knowledge graph. Or something like that. Just read it :).

So Google+ is an interesting beast. It’s currently my network of choice simply because there’s content here. I know, it’s supposed to be a ghost town, but I’m just going to ignore that for another discussion. The bottom line for Google+ is that if you’re interested in having actual conversations with people about various topics, whether technical or social, this is a great place to be… provided, just like any other network, you spend a little time to find the right people, pages, and now, communities.

But this isn’t going to be a general post about Google+. That’s just covering way too much ground.

I want to focus on a particular shift I’ve noticed for Google+: We’re slowly moving away from a people centric system to a topic centric one.

People

Just to get this out of the way, we still follow people on Google+ because they’re the ones that actually post things that interest us. Just like we do in a variety of other networks and systems. But the nice thing about Google+ is that it’s also very topic oriented. And this can actually be a foreign thing for a lot of people, especially those coming from Facebook. That network is much more tightly woven together by people connections (family, friends IRL, etc). Hence they just can’t seem to leave it since it means leaving actual people behind and having to change the paradigm of operation when entering Google+.

Twitter users sort of have the same problem. Though freer than Facebook in terms of who you follow, you’re still basically following people. You may join a hashtag conversation once in a while, but in the end, you can only do so much to continue to be with people in the same mindset. Twitter is an awesome system for quick, short shoutouts, events, exchanges, promotions, and blurbs, but [for me at least] it’s an absolutely horrendous platform to actually converse on. Nothing is threaded, it’s near impossible to see anyone else’s replies, and the reply-to-id is woefully underused. No wonder they’re trying to ban a lot of 3rd party clients to try to enforce a common UX across everything. But I’m derailing; that’s a topic for another time.

Google+ started itself off as a happy medium between traditional threaded conversations [like in forums] and the faster paced people oriented messaging [like in Twitter]. Coming in late, it had the time and luxury to see what worked in Facebook (connections with people are important) and what worked in Twitter (fast updates), and probably more importantly, what was missing (poor conversation tracking, lack of privacy, lack of easily expanding beyond your “[1000’s of] friends”).

Topics

Now, as it establishes itself (#2 social network), the next step is to push the system slightly away from people orientation toward topical orientation. People are still important, but it’s the topics they write about that keep people here. And Communities fills that niche very nicely.

It helps to centralize what would otherwise have been produced in separate streams by separate people, amidst all the other topics people write about. With the introduction of Communities, people can now post their relevant content to a specific location and aggregate everything in one place. This helps make sure that if a topic is hot, or a community is well populated, you can always find and converse with people of like interests. That’s incredibly important for user retention. People need to feel like they belong, and now even the newest of users can simply jump into a community and be instantly connected.

Brilliant.

But… they got it wrong.

Not so fast

People are not topics. They still have a need to be known as an individual. From a Google perspective, I think it’s easy to harvest our output into topics for others to consume and hopefully reciprocate by contributing back. After all, what better way to extend the knowledge graph than to have great semantic data about what we post. And it is a great benefit for all the users. But we, as producers, still need our identity.

And our identity is defined by the output we produce, but the implementation of Communities fragments our output. We’re now forced to choose whether to post content to our main stream or to a community. I know that we can decide if we want our community posts to appear on our profile page, but quite honestly, that’s a secondary place people go to for content. The whole point of circles was to group people and listen to those circles. And with the current setup, your community posts do not show up in your main stream.

So you have to choose. Do you post to a community? Do you post to your main stream? Do you double post? It’s a choice we shouldn’t have to make at all. And double posting just to make the post appear in your public stream is a plain waste of my time and effort.

Fix it

Easy fix? Allow us to post to a community and to our Public stream simultaneously in one posting action. I know the restrictions are in place to make sure the content publicity level and the community publicity level are accounted for, but that restriction was the easy way out (my guess is that a more robust method is being worked on, but it’s not a trivial issue). But there’s no reason to have that restriction in place for a fully public community. So perhaps the quick solution is to allow a “Public” share as well as a Community posting for those communities that are fully public.

I know there will be some strange UX issues, but quite frankly, this time, the benefits really outweigh the UX detriments of creating a special paradigm just for this one use case.

But you know what’s better than an easy fix? A proper fix. Ready? Take notes.

Go Multidimensional

People are not one dimensional, as circles imply. I don’t fit into just one circle; I probably fit into several. So it’s of no use to have a circle called “Java” and put me in there expecting I’ll just post about Java, since surely, as the internet is the internet, I’m going to post some cat pictures. And with that one fell swoop, I’ve made your topical circle cluttered with junk and conceptually worthless.

You could follow a community instead, but all you’re given is an “Alert On/Off” toggle, which floods the crap out of your stream if you leave it on for large communities. So you can either get alerted for everything or you have to remember to manually go to the community. Great; in that case communities are only theoretically useful for filtering.

We need to merge this concept of circles to follow people with communities to follow topics. You need to be able to listen to what I post when I post to a specific community. In order for that to happen, I need to be able to make one post that works both as a post to the community and as a post to my public stream. This type of post now essentially semantically tags my post with a topic, i.e. a community. Now all my posts are going to be semantically tagged. Isn’t that dandy? And we didn’t even have to teach people to use hashtags or any gimmick; it’s all part of the UX.

Awesomesauce.

Once my posts are tagged with a related community, let’s take advantage of that. Allow us to filter a circle based on a community (or even several communities). Now you can put me in a circle called “Java”, filter it against the Java Community, and only those posts I create that are also posted to the Java Community will show up in the circle stream. [Un]fortunately, you now never have to see my cat posts (well, that is, unless you put me in a circle associated with the Cat Pictures Community).

We’ve now taken the first step to balance the idea of people and topics into one UX friendly concept.

Progress

I’m now more willing to post to a community exclusively, since I know that if people want to just follow me as a person, my main stream has all my content. And those who just want to follow one specific aspect of me can filter me by the communities I post to. People will be more willing to circle me, I’ll feel the love of gaining readers, and that will encourage me to post more; it’ll encourage others to get just what they want and therefore interact more. It’s a win-win for both sides.

Well, for Google also, since interactions go up and, most importantly, we’ve started to add genuine semantic data to our postings that can be programmatically mined instead of inferred via context and content.

Communities are now more useful. Circles are much more useful. Following people is much more comfortable. Less crud in my stream from people posting things I don’t want, while still seeing the posts I do want from those same people. Egos can be satisfied. And we’re contributing to the I’m-sure-it’s-coming-soon Google knowledge graph search system.

It feels like a win-win for both sides, topical and people oriented, for Google and the users of Google+.

So, make it happen. Pretty please?

Well, this got a bit long. I can easily see what is out there now as the “quick” thing to put out. But there are also some directions the network is taking that may indicate that I’m not quite on the same page as the developers. Will need to dig deeper and see what’s actually there :).

java development


Java

Look, I know there’s all these new paradigms out there for programming applications that provide useful features and new technologies, but sometimes you still want to return to your roots and what you’re used to, just to see what it’s like.

I’ve been a Java programmer for a good part of a decade and a half and a lot has changed in that time. I’ve been a bit stagnant in taking advantage of the new areas of Java, so I thought it would be time to get a fresh setup up and running so that I can tinker and take a look to see if Java can indeed compete with the rest of the pack. But first, I need to get setup…

Eclipse

Love it or hate it, it’s a pretty robust platform that can’t be ignored. I lean more towards the love it side, but that may simply be because I haven’t really used many other integrated IDEs for Java development. Once you have the servers hooked in, I think Eclipse can offer a good experience in terms of “just getting things done”. But the one warning is that you really want to have enough RAM… I mean lots. I’m comfortable with the 16GB I have, but I can see why people feel like it’s bloated (it is) when you only have 2GB of RAM to share with other apps.

Oh, and sometimes, it can feel downright ancient, but you can easily work around that… I hope.

Setup

I’m currently on Ubuntu 12.10, and the default Eclipse is 3.8. I no longer know if this is really considered Indigo or Juno, but all I know is that the newer 4.2 branch is horribly unstable and prone to a lot of crashes on Ubuntu. I tried it, but had to revert back to the default, which I guess is the default for a reason.

Anyway, getting it installed and running is pretty much as easy as visiting the software center. The fun starts when trying to get all the plugins and dependencies resolved. For this round, here’s my goal, installed specifically in this order since I had trouble when things got out of order:

  • Install the Google Plugin for Android development and GWT (just to play around)
  • Install GlassFish
  • Install the “Web, XML, Java EE and OSGi Enterprise Development” tools
  • Install Tomcat’s TomEE variation
  • Install EclipseLink and JPA
  • Install Maven plugin
  • Install Git plugin

Now, just getting them installed is a bit different from actually using them, but I wanted to have some options set up now so that I can try these out when I’m ready to play with them. So, onto the installation woes.

All these are installed via “Help -> Install New Software… : Add…” by adding a new plugin repository. You can actually add them all first then do each installation if you find that easier.

Preparation: Eclipse Update Site

Site: Eclipse Project Update Sites
Repository: “Indigo Update Site” : http://download.eclipse.org/releases/indigo/

For some strange reason, mine was missing the update site. So I added it in. Not sure if it was missing because Ubuntu is supposed to handle the updates or not, but I thought it would be good to have it there in case I wanted to browse it to add some extra plugins (like some of the ones below).

What tripped me up was that you can actually quickly see all the sites that are available to the Eclipse updater by using the dropdown arrow next to the filter box. For some reason I wasn’t seeing that and therefore wasn’t seeing the list of installable software populated. It would be nice to have it show everything by default, but understandably, they didn’t want to have an initial network hit fetching all that data, especially if you have a lot of sites.

First: Google Plugin

Site: Google Plugin
Repository: “Google Plugin” : http://dl.google.com/eclipse/plugin/4.2

This will install things like the ADT Plugin for Android development (you still need to have the SDK installed, which you can find instructions for here). It also adds support to be able to play with GWT which is a nice addition. Not sure if I will, but it’s on the list of things I want to take a look at.

After a bit of experimenting, it turned out this plugin has to go first. If I installed it later, it complained about some ant dependencies that just wouldn’t resolve themselves. The joy of dependency hell. But install it first and there are no problems.

The plugin comes in a 3.7 flavor also, but use the 4.2 variety. I think 3.8 is really almost the same as 4.2, just without the UI makeover.

Second: Glassfish

Site: GlassFish
Repository: “GlassFish” : http://download.java.net/glassfish/eclipse/indigo

GlassFish is basically the reference Java EE server produced by Oracle to support and showcase the standard. I figured it should be a good one to have and use, especially since it came as a complete package with the Apache Derby DB. Why not.

Once you load up the repository, you should see a “GlassFish Application Server” entry. Go through all the hoops, restart Eclipse, and you’ll end up back at the Install New Software section. I figured it’s pretty heavy duty, and the dependency chain it installs will pull in a lot of what I’ll need later on.

Note: I’ve had the repository fail on me from time to time, so alternatively, you can use the Eclipse Marketplace to get Glassfish installed. There’s a good instructions set on Tech Juice.

Third: Web/XML/Java EE Tools

Repository: “Indigo Update Site” : http://download.eclipse.org/releases/indigo/

From the Indigo Update Site, I also loaded the “Web, XML, Java EE and OSGi Enterprise Development” tools, but without the “PHP Development Tools (PDT) SDK Feature” and “Rich Ajax Platform (RAP) Tooling”. Not 100% sure why; I just followed the instructions from a Tomcat installation post. I figured they know what they’re doing, and I’m not a PHP person anyway, so no loss there.

Fourth: TomEE

Site: TomEE [Tomcat]

Like GlassFish, TomEE is a full service stack. It’s basically Tomcat with the added modules to make it into a full Java EE stack.

This one you need to go download and install manually. We’re simply going to associate Eclipse with the freshly installed server. So, go download TomEE from the Apache site (I got the Plus version). Then just “tar xfvz” it somewhere you like (I put it in /opt). It will need to be owned by the same user that will be initiating Eclipse in order to allow for file changes, so I just “chown -R” it to my user and group.

The instructions are nicely laid out on the site. If you already have a project, you can use the “Quick Start”; otherwise it’s not too much of a hassle to do the “Advanced installation”. The only difference for me was that I couldn’t find the “Modules auto reload by default” checkbox. I changed the ports to +2 everything, since I’m letting GlassFish run on port 8081 (a +1 from the default 8080). That way it won’t interfere with my system’s default web server.

I opted to let Eclipse manage this instance of Tomcat so I set the preference to “Use Tomcat installation”.

When doing the server association, you can actually tell the system to go download Tomcat for you. I guess that way you can have a completely managed Tomcat instance via Eclipse, and all you would need to do is add the additional libs/wars to turn it into a TomEE instance. I just opted to install my own separate instance and hook it in, but let me know if you’ve done otherwise.

Fourth.5: GlassFish configuration change

Just a note: I hate crappy directory layouts, and GlassFish creates a top level directory in your workspace for its server. However, once you install TomEE, it puts its server information in a “Servers” directory, like it should. So to fix this, I do a little cleanup to move the GlassFish server directory into the proper Servers directory.

To do this, open up the “Servers” view, which should be showing a GlassFish server. Double click it and find the Domain Directory entry. Before hitting “Browse”, I copied the “glassfishdefaultserverlocalwhateverthislonguglynameis” directory to something like “Servers/GlassFish-3.1.2.2”. Once I had that, I hit “Browse”, simply selected that new directory, and deleted the old one. All nice and clean.

Configuration Note: For some reason, I can’t unlock the server port to edit it via the provided interface in Eclipse (when you double click on the GlassFish server, the Server and Admin Server Port entries are locked for me; let me know if you know how to change the default port that way). But you can do it manually by going to the GlassFish server directory (the one I changed to “Servers/GlassFish-3.1.2.2”), editing the config/domain.xml file, and changing:

<network-listener port="8080" protocol="http-listener-1" transport="tcp" name="http-listener-1" thread-pool="http-thread-pool"></network-listener>

to

<network-listener port="8081" protocol="http-listener-1" transport="tcp" name="http-listener-1" thread-pool="http-thread-pool"></network-listener>

And that should do it. If you want to also run a standalone version of GlassFish, you should probably go ahead and +1 all the other <network-listener> ports so that there won’t be any conflicts.

Fifth: EclipseLink and JPA

Site: EclipseLink
Repository: “EclipseLink” : http://download.eclipse.org/rt/eclipselink/updates/

EclipseLink allows you to easily use the Java Persistence APIs (JPA) to connect and pass objects to and from a data store. Well, so they say. I have yet to try it so we’ll see how easy it really is :).

Sixth: Maven

Site: Maven
Repository: “Maven” : http://download.eclipse.org/technology/m2e/releases/
Repository: “Maven Integration for WTP” : http://download.eclipse.org/m2e-wtp/releases/

Maven’s the super ugly cousin of npm, gem, and the other modern dependency/package management systems. But I guess it works once you get the hang of it. I have yet to get the hang of it, but would like to, hence I’m installing it.

You can find more detailed installation and project setup instructions on this companion post: Maven Setup.

Seventh: Git

Site: eGit
Repository: “Git” : http://download.eclipse.org/egit/updates

And Git, or eGit, jGit, whatever. It’s basically git. Everyone seems to love git (and thus GitHub). I’m learning it, so I may as well jump on the bandwagon.

Warning: Of course, the natural progression is to want to install the GitHub plugin. YMMV, but I couldn’t get it to work. In fact, it bricked my entire Eclipse installation, and I had to clean out and reset my Eclipse back to the default and redo all the steps. I have no clue what went wrong, why, or how to fix it, but it’s borked me enough times to stay away from it for now.

At first when I tried to install the GitHub plugin (Site, Repository), it complained that I needed Mylyn installed. So I tried to install Mylyn (Site, Repository), but it complained that I needed Subclipse installed. So I installed Subclipse (Site, Repository), then Mylyn, then GitHub, then Eclipse would no longer restart. Had to nuke it all.

But do let me know if you can get it working…

Cleanup

So, speaking of nuking: sometimes something will just go wrong and you need a fresh start. I’m not quite sure what to do about projects that got configured with important information, but I had a clean slate from the beginning, so I didn’t have to worry about relinking projects. If you need a clean Eclipse slate, because something like the GitHub plugin borked your installation, here are some of the directories to clear out:

  • Remove the version of Eclipse that you are using from ~/.eclipse
  • Remove the .metadata directory from your workspace.

Pretty easy, but again, be warned, if you already have established projects and you do this, the meta information about your project will be lost and you’ll have to re-import them back into Eclipse. I was on a clean slate so not too many worries for me.

Startup Errors

I’ve noticed that at times I’ll get a message during startup complaining “An error has occurred. See the log for more details”. And that’s it. No more eclipse. The log is equally unhelpful complaining about some sort of OutOfMemory error.

I found this article which alleviated the problem by calling eclipse with the -clean flag:

eclipse -clean

That seems to allow Eclipse to start up with no detrimental effects. No clue why, and neither did the author of the post know. But so long as it works, I may as well leave that tidbit here.

Notes

Even though both GlassFish and TomEE are installed and runnable simultaneously, unfortunately you have to associate your project at creation time with one or the other. Which means you can’t just plop a GlassFish project into TomEE and vice versa. But perhaps with some manual XML hacking, it can be made switchable after you create it. I just haven’t gotten around to that level yet, and probably won’t. But it’s good to have either available to play around with and see what’s going on.

Next

Now that I’m all Java setup, I’ll probably start taking a look at some of the newer frameworks and see what they offer. Things like Spark, Britesnow, Play, Vaadin, Jetspeed, Jersey and even older ones like Hibernate and Spring… maybe. In the end, I want to see if modern Java development is up to snuff to compete with the newer frameworks.

As a preview, my next task would be to get a bit more of an understanding of JPA, which I’ll be experimenting with via this set of instructions. But experimenting early on, I’ve already run into a problem, so hopefully I can get around it (and I’m sure others) after a deeper dive.

Sometimes I can’t tell if Java is really the standout in terms of diversity and functionality or if I’m just used to it. Everything else feels pretty lacking in structure and organization. But I’ll see if there’s something out there that can compare. It is bloated, but there’s quite a lot out there for pretty much anything you want to do.

extractor development diary


Development, as straightforward as it may seem, is never as straightforward as you would like. Oftentimes, you have an idea in your head. You know where you want to go. You even have the general gist of how to get from point A to point B (if you’re lucky). But the road to get there is never quite as straight as planned. The URL Stats Extractor was definitely one of those experiences.

Chrome Extension

So, I wanted to do a simple Chrome extension that showed me the stats for goo.gl based shortened URLs. Mainly for my own want of keeping track of links I’ve created, but also to see how popular other links can be. Plus, the main http://goo.gl site is, how can I say it nicely, “lacking” in any modern website features. It’s pretty crappy: you can’t search, nor reorder the links, nor even actually delete anything, just hide it. But it does have a robust API.

Almost forgivable.

I chose a Chrome extension simply because that’s my current browser of choice. It’s robust, the tools are there, and there are lots of good guides. So why not.

The Idea

Back to the extension. The plan was to create an extension that would find all the shortened URLs on a page and give you back some information about them. I didn’t want to permanently disrupt the page visually, since there’s a lot of sensitivity with layouts and structure and widths and heights and so on. So I opted to go with a popup. The plan was that on a mouseover, the popup would trigger, hit the API, fetch the information, render the data, and show you a nice bubble. Simple. Yes… simple.

Preparation

First things first. I needed to remind myself how to develop an extension. I often find examples to be invaluable to get a kickstart in the right direction.

Of course, I needed the reference for the goo.gl API and do all the signup needed to get developer keys and so on. Pretty easy. Wasn’t even tedious.

I chose to just shove jQuery into the extension so that I could leverage it for my DOM manipulation, which would then allow me to plug in the various popup libraries people write against jQuery; I was hoping that would help move things along. More on that in a bit.

I’ll mention this now even though I ended up retrofitting this after I wrote the extension, but Alex Wolkov has a nice site called Extensionizr (found via this thread). It’s a quickstart way to get a Chrome extension bootstrap setup and configured nicely, especially for future expansion.

Research

Two simple things left to figure out:

  • Find out how to embed a mouseover trigger where I need.
  • Find a nice popup library.

Mouseover

For the mouseover, I could either find all the anchors and see if the URL is one I can analyze, or I could continuously figure out where the mouse is, find the word under there, and determine if that’s a URL I can show stats for in that spot. The first method will find all the hyperlinks on the page and allow me to analyze them so that I can embed trigger points. But it will miss non hyperlinked URLs that appear in plain text. The second method can be made to work with any kind of text, but the complexity required to figure out the underlying code and to see if an anchor surrounds it all with a suitable link was a bit daunting.

For this project, the first method was fine. Something from stackoverflow pointed me in the right direction. I ended up simply finding all the <a> tags, grabbing the href from them, and analyzing it to see if it’s something the extension understands. If it did, I would embed a class on that anchor, which I would later use as the trigger point for the popup.

I only mention the second method since something like that could be really interesting to use for some other word-on-a-page analysis type thing. I started down the road of dissecting the Rikai-kun extension’s source code, but it was actually far more complex than I imagined, so I put that on the back burner. It was overly complex for what I needed for this extension anyway, and it didn’t quite solve the problem nicely, so it’s tucked away for a future project.

Popup

These are a dime a dozen out there. So it shouldn’t have been hard to find one that works, right? Right? Turns out, it was a touch difficult.

Akita

I started out by asking for any recommendations and came back with something from Paul Yaun called Akita.

It worked fine with a few tweaks. Namely I had to catch the hover event myself then force open and close the tooltip based on entering and exiting the hover.

// Bind hover handlers to every anchor marked during the page scan,
// opening the tooltip on mouse enter and closing it on mouse exit.
$('a.' + MARKER_CLASS).hover(
    function() { triggerExpand($(this)); },
    function() { clearExpand($(this)); }
);

I then triggered the popup in triggerExpand:

var speechBubble = $.akita.show({element: $(link)[0], content: output});

After that, I went and ajax fetched the dynamic content. Just like planned… except it didn’t work inside of Google+, which is my primary target since the goo.gl shortener is used well in there (well, at least by me). Normally these tooltips run in a clean environment that you control when building, so you can manually resolve CSS conflicts and issues by adjusting your styles in interfering classes. But seeing that an extension is an injection, that would mean I’d have to analyze how and what Google+ was doing that was interfering with the extension. As fun as that could be (sarcasm?), I figured these popup libraries are a dime a dozen, so time to move on to the next one.

Tiptip

Up next was tiptip. Although there hasn’t been a new version since 2010, it seemed stable and light enough. I plugged it into my test env, and it worked fine. I even plugged it into the extension, and it actually worked on Google+, awesome.

But… it didn’t have the ability to fetch dynamic content… which was a deal breaker.

However, in the forum for the plugin, someone provided a patch that allowed the passed in content variable to either be a string or a function. Which meant that I can now pass in dynamic content via a delayed function call. Yay.

I applied the patch and now I could generate my own content on the fly. And I could now place a “fetching…” wait marker as the dynamic content got loaded.

Almost there. The next hurdle was acquiring the context of the anchor point. Since I hacked on an async call to get data, once that data returned, it needed to know the context of the location, which is something the original plugin didn’t really need. The patch didn’t take that into account, but it was easy enough to muck with the source to allow it to pass in the reference to the original element (already in a variable called orig_elem).

Ok, more problems, but more solutions.

That actually worked fine. The tooltip worked and showed the content properly, except that the default location was on the bottom. A browse through the config for tiptip got it showing on the top… then things broke… of course.

The triangle anchoring the content to the anchor element was shoved in the middle of the multilined popup. Ugh. So this popup only works well if the anchor was on the bottom. Great. Nix that. Onto the next.

Bootstrap

Fun. So I figured, hey, maybe a well established system like Bootstrap could be the solution.

Awesome, this should be good. The docs looked fine. It supported dynamic content right off the bat (though I wasn’t sure about the proper reference passing). However, it failed the Google+ injection test (seriously Google+, what the hell is in your css/javascript mojo for the site…). It would only do an awkward partial render, the elements of Google+ obviously interfering. I tried both the tooltip and the popover, and neither went well.

Back to Tiptip

So back to Tiptip. I guess a bottom tooltip will be good enough for now. I was getting tired of over exploring and just wanted to see something working that I could actually use.

As I was writing the notes for this post, it became clear that the location of the anchor bit was actually correct, but the content of the tooltip had expanded too low due to the addition of the new dynamic content, which was a multiline thing as compared to the initial placeholder text (“fetching…”) on which all the calculations were made.

The top of the bubble was calculated based on the initial content, not the new content. Sigh.

All these weird little issues may explain why the async patch pushed to the git repo of tiptip never got integrated because the rendering mechanism just wasn’t compatible with dynamic content.

So the question now was whether I could make the container re-render upon return of the new content, or fix the calculations to make them truly dynamic, or switch to synchronous ajax calls and return the full data right then and there so that the system could get it back right away.

For now, I’m switching from $.get to $.ajax and passing in an “async: false” flag to force the fetch to complete first. I figured the call should be pretty quick, so not much lockup to worry about.

That finally did it. All the parts working well enough to warrant actual use. Not pretty, but working at least.

Presenting, the URL Stats Extractor. Complete, with GitHub source.

Todo

Things to do? Import the pretty graphs from the goo.gl page as part of the display (or just a new display interpretation other than plain text). And I need to dynamically detect how much space is available above the tooltip and switch it from top to bottom as needed. Though I think that should really be in the realm of the plugin code, so I shouldn’t have to muck with it. That, and perhaps exploring the tooltip landscape a bit more to find something I can use without any modifications.

Also I need to figure out how to detect when new content gets loaded onto a page, like in the case of Google+, so that the extension can rescan for links and do the proper insertions.

Oh, and the extension uses the new manifest version 2, so there’s a Content Security Policy error being reported about a base64 image being loaded… except that I don’t use base64 images and I can’t seem to find what base64 image is being loaded or what’s loading it… I suspect it’s somewhere in the jQuery code, but I just can’t seem to figure it out. For now it’s not a show stopper, just an annoyance in the console, so not too bad. But I’d like to see it go away for good.

Eventually, I could probably extend this to include bit.ly, is.gd, or whatever other shorteners are out there, but perhaps that could be something anyone willing enough can pull from the git repository and contribute :)

This is one of those projects where a few days of quick work got it functional 80% of the way; I’m not sure I have the dedicated energy to take it the remaining 20%.

ruby ssl

| Comments

ngmodules

In my tech explorations, I had stumbled across this AngularJS Modules collection project, which I thought was neat. Written in Ruby, it’s basically a simple way for people to add their AngularJS modules and to list and search for useful ones. I’m not at all versed in Ruby, so I thought it would be an interesting exercise to get it running locally and take a peek under the hood of a simple Ruby project that presents the whole stack.

The site came with a pointer to its GitHub repo, complete with a Readme.md outlining the steps needed to get it running locally. Well-documented, clear instructions; what’s not to love about that?

Installation

Scanning the list of requirements, there were a few things I didn’t yet have installed on Ubuntu 12.10, so off I went. I had to install postgresql (sudo apt-get install postgresql) and pygments (sudo apt-get install python-pygments), but since I already had Ruby installed (thanks to Octopress), the rest of the instructions worked out well.

With everything installed and built, all that was left was to hit the big old button (or rather, run the command line: bundle exec rails server) and see it in action. For once, it amazingly worked right out of the box. I now had a nice clone running on localhost:3000. All happiness and fun times… until I hit the “Login with GitHub” button.

It almost worked.

Problems

But as we all know, almost isn’t quite good enough. I was greeted with an SSL error:

Faraday::Error::ConnectionFailed
SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed

A security issue, oh fun.

Solutions

It took a bit of digging to find out that it was a Ruby configuration issue with SSL certificate verification. After a bit of searching and probing for answers, I was led to this answer on StackOverflow: http://stackoverflow.com/a/11041204

Another fix that worked in the interim was to prefix the Ruby command with the SSL_CERT_FILE environment variable, pointed at the downloaded cacert.pem file, wherever you put it. Something like this:

SSL_CERT_FILE=/etc/ssl/certs/cacert.pem bundle exec rails server

Useful if you can’t reinstall and rebuild Ruby for some reason. With that in place, the callback from GitHub worked fine and I was able to log in with my credentials. The last hurdle was crossed, and I finally had a working clone of the project running locally. Now I can take a little time to explore the code and see how it was built.

Next

I’m still deciding whether Ruby is something I want to jump into and learn, something to bypass in favor of writing backends in Javascript for Node.js, or whether I should simply see if Java has made any packaged advances in this area (without the daunting complexity of WS-* services and the usual overblown nature of Java).

But for those in Ruby land, I figured others may have hit the same issue, so I’m posting it here just in case. Ah, the internet, such a nice place to hang out.

I couldn’t decide if this post should be part of the Ubuntu 12.10 quirks post or something to stand on its own. It ended up being pretty long, so I gave it its own post.

technology exploration

| Comments

Exploration

Starting a new project is daunting, especially when you only have a general idea about what you want to do. But it’s also a great time to explore and check out the new toys out there.

I’m predominantly a Java programmer. So I’ve been accustomed to working through my typical stack of tools to get things done. A nice relational database, some ORM layer to access the database, a servlet layer, a web layer to produce APIs and web compatible objects, and then all the front end magic to put it together for the end user.

It’s been all nice and dandy, but at some point, you need to sit back and wonder if you’re still on the right track… or rather, the most efficient track not just for development, but also just for the sake of keeping your toolset sharp and being cognizant of the new ways of doing things out there.

So, I was thinking about taking a bit of a break from the traditional way I do things to see what I can explore out there to help me learn something new and interesting. Something to get me nicely ramped up and eliminate all those bottlenecks I used to run into. It couldn’t hurt to look right? Just to see what new things are available that could help me out?

It turned out to be a far bigger, faster, more complex world out there than I thought.

Findings

Databasing

There’s the whole NoSQL movement, which I’m still convinced is more akin to “I-don’t-like-SQL-so-I-want-some-other-query-system” than a genuine improvement. The plus side is that these new tools were built in the age of web applications and public APIs, so usually everything is geared and ready to be jsonized from the start. Some popular databases here are mongoDB and CouchDB, with which you can apparently pair PouchDB for offline databasing. Interesting.

Server side

There’s the whole Node.js movement, which is nifty, although having seen where Javascript came from, it makes me cringe to think how core a language JS is becoming even on the server side. But apparently the performance of Node and its plugins can be tuned to something on par with a traditional Java stack. That’s at least a good start.

Though that’s not saying too much, since the Java stack was never the fastest horse in the stable, just the most robust one. The other neat thing about Node is its robust plugin environment. Lots of extra tools and tidbits to add prebuilt layers and reduce complexity. End point and service generation frameworks like Express, and an interesting upcoming one called Hapi, built by one of the OAuth authors, Eran Hammer, sound pretty promising.

Packaging systems

Deployment, packaging, preparation, compression, parsing, aggregation, and all that complex/tedious but necessary stuff. Things like npm and grunt help build and put things together. There’s the standalone Bower package repository that you can pull from to keep your libraries up to date. Lots of tools and utilities out there to help put things together and get them to places like Heroku and AppEngine.

Client side

Lots… and I mean LOTS of stuff here. Everything from the usual jQuery for easy DOM and Javascript manipulation, and utility toolkits like underscore.js, to nice templating and backend hookup systems like AngularJS and backbone.js. And of course, the now ubiquitous CSS bootstrapper Bootstrap for easy visual uniformity.

Generators

And to put it all together, there’s a boom in projects helping you create and bootstrap your entire development stack. Projects like Yeoman and Meteor (my take is here) give you the full client-side stack ready and prepared, and some work is being done to do the same for the server side. Looks like a promising and interesting way to get up and running fast without having to muck with all the setup work. Plus, there’s usually a testing infrastructure or two included, like Testacular, but there are also things like Mocha and Swarm.

So much help

There’s a slew of videos, examples, documentation, forks, samples, and convenient generators for just about anything. If you don’t want to craft something on your own, just search for a generator of some sort and there’s probably a site dedicated to it. Video tutorials can show you how code is crafted and, sometimes more interestingly, the IDE environments some of these coders use.

And more…

And all the even newer cutting edge 0.0.1-alpha projects in the works. Bleeding edge concepts pushing boundaries and testing waters.

Oh yeah, apps!

Which is a whole different beast. Written in Objective-C, Java, or HTML5; with native support, ubiquitous support, responsive, catered, and whatnot. Various platforms like iOS and Android with their associated tutorials and development environments. Don’t forget ChromeOS and its set of Chrome apps, extensions, and themes too. The application stores themselves, like Apple’s App Store, Google Play, and the Chrome Web Store, all offer a variety of places to show your app to users.

And so much more…

There’s a lot… I mean, a lot out there. And I’m finding more every day, with every search, every tangential link, and every posting I run into. Plus it’s all evolving and changing and improving.

I’ll probably be posting about some of my discoveries on a day to day basis via my Google+ account, so feel free to follow me there.

I hope I can keep up…

Further

Sharing some additional links with more information that I found useful. Feel free to recommend some more:

Links dynamically pulled from Delicious.

Image sourced from onceuponageek, but unsure of the original creator.

yeoman and meteor

| Comments

Really, really quick impressions of Yeoman and Meteor. I was initially going to include AngularJS, but I’ll leave that for a subsequent post since it’s a bit different.

So I’m exploring. And deciding on a new project to start. Which means I want to get versed in some new technologies out there, just to get a feel for what’s going on. I figured I should start with some of the more convenient framework generators, so here are quick impressions of two I want to tinker with. The impressions are in no way complete, nor have I even scratched the surface of everything they can do. So think of these as my initial notes, the start of a process by which I’m hoping to pick a solid one and get going.

Yeoman

Site: http://yeoman.io

A pretty nice client-side framework builder. Run a simple command line tool and it’ll tell you what you’re missing and how to install it (if you’re on Ubuntu, it gives you the apt-get commands; I’m assuming it shows the equivalent for other systems as well). Run it right out of the box, or init it with things like AngularJS, Bootstrap, backbone, underscore, jQuery, or whatever you choose (from Bower). It handles getting you set up, organized, laid out, and frameworked up with your choices so that you can just get going. Bonus: since it includes node.js, you can quickly call up a server instance and see your code running live without having to muck with any additional server setup.

It feels really nice to have all the hard setup work done for you. All the compression, parsing, transcoding, mixing, mashing… all done. I think there are still some small issues with the AngularJS and Bootstrap integration (it doesn’t show Bootstrap loaded on the welcome page), and I’m not sure if it’s possible to tell it to use specific versions (like if I wanted AngularJS 1.1.1 instead of the stable branch), but it does what it needs to. I’m definitely looking forward to trying it out a bit more and seeing if it gets rid of my setup headaches.

Meteor

Site: http://meteor.com

Like Yeoman, Meteor is a framework builder. But it also adds mongoDB to the mix and gives you the complete stack. So it sets up the client side and then also the server side. The fun thing is, it links the two together so that you can easily access the server code and manipulate your data store right from the client console in your browser. It also handles latency compensation, which is just a fancy way of saying it updates the local UI with data before sending it over the wire to the database for proper storage. I’ll have to see how it handles concurrent updates and how conflicts get resolved when the local UI is out of sync with the database.
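As I understand it, the pattern looks something like this (a sketch based on Meteor’s documented methods API; the collection and method names are mine):

```javascript
// Shared code: the method runs as a client-side simulation (updating the
// local cache immediately) and again on the server as the authoritative run.
Posts = new Meteor.Collection('posts');

Meteor.methods({
  addPost: function (title) {
    Posts.insert({ title: title, createdAt: new Date() });
  }
});

// Client: the UI sees the insert instantly; if the server's result differs,
// the local cache gets patched when the authoritative data arrives.
Meteor.call('addPost', 'Hello latency compensation');
```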

But that aside, it’s also a nice setup that takes care of the full stack. As with Yeoman, it has customizable extension points that let you add your own favorites, even supporting AngularJS with a touch of extra work. This one I’m certainly keeping an eye on to see how useful it can be for getting something quick and dirty up and running.

Oh, and an even more superb benefit: they give you a subdomain on meteor.com that you can publish to, so you can immediately host your project. Right then and there. That’s really cool if you want to build something dirt simple and quick that doesn’t necessarily need to live permanently. That, I think, a lot more frameworks should offer.

Thoughts

I really love the direction these framework builders are taking. They remove the headache of having to get everything set up right and just let you start building. But in the end they are still just frameworks, and you still need to build your content. What I’d like to see in the next evolution is support for prebuilt, generic application flow layouts. I think this is something Meteor could benefit from immensely: offer the developer a one-command setup that builds a user-based application layout, complete with a login screen and twitter/facebook/google OAuth2 credential storage integrated directly into the database.

Update: the accounts-ui package for Meteor seems to have this already. Nice. I’ll have to see if it does the trick.

Update: yeoman-express-angularjs. Looks like express.js support is in the works for an upcoming Yeoman release.

That would really be awesome. With that taken care of, the developer can truly concentrate on developing the meat of the application. I’m not sure if Yeoman is slated to go in that direction, as it deems itself a client setup system, but it would still be nice to see the right page layouts and hooks all set up and ready to go. The less I have to think about how to do this, and the more I can leverage a standard way of doing it, the more time I’ll have to actually build my application and get things going.

Looking forward to seeing how these framework generators evolve.

aaron swartz

| Comments

I posted this initially on Google+, but I’m expanding on it a little here.

Sometimes the internet does do some good. Today, in memory of Aaron Swartz, academics are taking time to freely “publish” their papers and make them available to the general public. You can follow the #pdftribute tag on Google+ and Twitter.

It’s a good way to help spread information that is typically free to view and read, save for restrictions due to copyright or distribution rights. These are the kinds of restrictions Aaron circumvented and ended up being charged for. More details are available on the corresponding Wikipedia page.

I didn’t know much about Aaron, but the punishment just did not fit the crime. Anyone could see that. A million dollars and a 35 year jail sentence for the release of publications that were otherwise available to the [academic] public (just not as a mass archive). Academic papers that, being academic, were supposed to be for the enlightenment and education of others, not tucked away behind publication paywalls and restricted repositories. A punishment of this magnitude can certainly weigh on anyone’s mind when they’re being prosecuted for something they deemed acceptable, albeit close to the line.

But this is not what bothers me the most.

It’s just unfortunate that, as usual, it took a life before this was realized. And as great as this #pdftribute movement is, it could have made a significant difference to have done this for Aaron while he was alive. To show active, not reactive, support. Obviously the movement shows that a supportive community exists, but only after the fact.

This isn’t a lesson to teach the world about Aaron and the inequities of the justice system. There will always be “Aaron”s, and there will always be overzealous prosecutions. Laws can be changed and attitudes turned, but something unfortunately always needs to trigger that change to actually make it happen.

No, this incident is a lesson for us.

It’s a lesson that if we continue to dismiss these things as just a problem someone else has, we’re going to end up with more reactionary efforts than proactive ones. We will continue to lose people who may have made mistakes, but who in no way deserve depression, punishment, or even death. It’s a problem we all have: apathy, turning a blind eye, calling it someone else’s problem. Of course, I’m guilty of that too. This post is completely reactionary, after all.

But this is the exact mindset that needs to change. To see an injustice and do something proactive, not reactive, about it. Tributes are fine. They’re a way to show support. But they’re often just too late. We need to see what’s happening around us now and change what we can, when we can, to spare the need for a tribute. That’s not easy, since people really are often driven by damage rather than motivation. But it needs to change.

This change in mindset, perhaps, is a much more fitting tribute to Aaron, and falls in line with what he was trying to accomplish.

Image courtesy of Quinn Norton via The Verge.