
Airbus Seeking Patent For Bicycle Seats In Plane Cabins Because Flying Isn’t Uncomfortable Enough Already


(Image: Airbus)

While you’re fighting for territory on the armrest and suffering the kicks, nudges and other annoying seat disturbances that come with flying commercial, just think… it could be worse. How much worse? Like perching-on-a-bicycle-seat worse.

Airbus has filed a patent application in Europe for the seats, which have small backrests but no tray tables or headrests, reports the Los Angeles Times. And legroom? Keep dreaming.

The pared-down design is an attempt to cut down on bulk, which in turn allows for more sardined passengers and, ostensibly, more money.

“In effect, to increase the number of cabin seats, the space allotted to each passenger must be reduced,” the patent application states.

Of course, just because aircraft manufacturer Airbus is seeking a patent for something that resembles a torture device, that doesn’t mean we’ll all be perched on hard, foldable seats anytime soon… right? After all, Airbus has said wider seats lead to happier customers.

“Many, if not most, of these concepts will never be developed, but in case the future of commercial aviation makes one of our patents relevant, our work is protected,” an Airbus spokeswoman explained. “Right now these patent filings are simply conceptual.”

*Thanks for the link, Thomas!

Airbus seeks patent for bicycle-like airline seat [Los Angeles Times]

wundram
3576 days ago
This actually looks more comfortable than the current airplane seats. There is no legroom problem because you are basically standing. And the person in front of you can't crush your legs with the seat back. For a 1-2 hour flight, this would be better.
3 public comments
cratliff
3577 days ago
I feel like I've just seen my future...
South Portland, ME
BiG_E_DuB
3577 days ago
Lolol wtf
Charlotte, NC, USA
farmjope
3577 days ago
wow...

Waterfagile


Some folks use Waterfall. Some use Agile. A. W.'s team uses Waterfagile. Now you might ask, wtf is waterfagile? Well...fasten your seatbelts...

A. W.'s manager was charged with designing a black-box replacement for an existing system that was over-engineered, over-complicated, over-interfaced, over-configured and utterly incomprehensible. The highly paid consultants who wrote it have all long been let go. Before they could write up user guides, developer guides, architecture diagrams, or pretty much anything else. All of the records of the licenses for the third party libraries used by the project were lost or misplaced. When something went wrong, the only plausible answer was: sorry, we can't fix it, so incur the loss.

The new system had to be bullet-proof. It had to be scalable by 4 orders of magnitude. It had to crunch all that additional data in less time than the current system. It had to be fully configurable, so users could enter a new task-name in a web form, and the application would magically perform the described action.

Naturally, the developers pushed back to dial down the lunacy to the realm of the merely theoretically plausible.

Faced with a near revolt on his hands, the manager decided that the best way to handle all of this was to combine the best attributes of waterfall and agile methodologies. To this end, he had A. W. and another architect designing the main features and control structures. Then he had the junior developers following behind them doing the implementations. However, since there were so many tasks to do, he would hold a special scrum with the architects every three weeks to plan out the next sprint. He would essentially go down the list of tasks in the waterfall project plan, and dole them out based upon the number of available hours for each person.

However, he had each person allocated at 90%, which works out to 4 hours of other stuff each week. Yet he scheduled each person for 10-12 hours of meetings each week, including the daily 60-minute scrum, during which each of 12 developers would spend 5 minutes talking about what was done, what was next, and what was blocked. When the consultants started rolling up the billable hours, he ordered them to limit it to 40 hours per week. Of course, then the work wasn't being finished on time.

After numerous meetings to discuss why work wasn't being finished in the allotted time, he was finally convinced that the excess of meetings, limited hours and basic arithmetic added up to the problem. His solution? Work more hours but don't bill for them. The consultants in Kerbleckistan knuckled under and did it. The more senior folks in the office decided that if they couldn't leave after 40 hours and weren't being paid for more than 40 hours, they'd just run all of their errands during the day.

This went back and forth for a while until they finally tried to run the very first load test for the new software. It turned out that 84% of the CPU time was spent in one routine. But this was no ordinary routine. No. It was the access control list to set state variables. You see, it was important to control who could call the public setter for each state variable in each class. Lists of classes and methods were mapped to each state variable in each class. Thus, instead of simply setting a state variable, each setter had to first call a routine that would look up a class, then the state variable in that class, and then look to see if the class that wanted to change it had permission, and then if the method in that class that actually wanted to call the setStateXyz(...) method had permission to call that method. All of that just to set a variable. Each and every time. Billions of times.
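
The story doesn't show the actual code, but here is a minimal Java sketch of the pattern it describes. Every class, method and variable name below is invented for illustration:

import java.util.Map;
import java.util.Set;

// Hypothetical reconstruction of the described ACL: a global map from
// "OwnerClass.variable" to the callers allowed to write that variable.
class AccessControl {
    static final Map<String, Set<String>> ACL =
        Map.of("OrderState.status", Set.of("OrderService.approve"));

    static boolean mayCall(String callerClass, String callerMethod,
                           String ownerClass, String variable) {
        // consulted on every single write, billions of times
        Set<String> allowed = ACL.get(ownerClass + "." + variable);
        return allowed != null && allowed.contains(callerClass + "." + callerMethod);
    }
}

public class OrderState {
    private String status;

    public void setStatus(String status) {
        // walk the stack to identify the caller, then check the ACL...
        StackTraceElement caller = Thread.currentThread().getStackTrace()[2];
        if (!AccessControl.mayCall(caller.getClassName(), caller.getMethodName(),
                                   "OrderState", "status")) {
            throw new SecurityException("not permitted to set status");
        }
        this.status = status; // ...all of that just to set one field
    }
}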

The rewrite is now undergoing a rewrite.


Git Submodules: Core Concept, Workflows And Tips


Including submodules as part of your Git development allows you to include other projects in your codebase, keeping their history separate but synchronized with yours. It's a convenient way to solve the vendor-library and dependency problems. As usual with everything Git, the approach is opinionated and rewards a bit of study before it can be used proficiently. There is already good, detailed information about submodules out there, so I won't rehash it. What I'll do here is share some interesting things that will help you make the most of this feature.

Table Of Contents

  1. Core Concept
  2. Possible Workflows
  3. Useful Tips Incoming
  4. How to swap a git submodule with your own fork
  5. How do I remove a submodule?
  6. How do I integrate a submodule back into my project?
  7. How to ignore changes in submodules
  8. Danger Zone! Pitfalls Interacting with Remotes
  9. Conclusions

Core Concept

First, let me provide a brief explanation of a core concept of submodules that will make them easier to work with.

Submodules are tracked by the exact commit specified in the parent project, not a branch, a ref, or any other symbolic reference.

They are never updated automatically when the submodule's upstream repository changes; they change only when the parent project itself updates the pinned commit. As the Pro Git book very clearly puts it:

When you make changes and commit in that [submodule] subdirectory, the
superproject notices that the HEAD there has changed and records the exact
commit you’re currently working off of; that way, when others clone this
project, they can re-create the environment exactly.

Or in other words:

[...] git submodules [...] are static. Very static. You are tracking
specific commits with git submodules – not branches, not references, a single
commit. If you add commits to a submodule, the parent project won’t know. If
you have a bunch of forks of a module, git submodules don’t care. You have
one remote repository, and you point to a single commit. Until you update
the parent project, nothing changes.
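
To make that concrete, here is what the lifecycle looks like on the command line. This is just a sketch: the URLs, paths and hash below are made up for illustration.

$ git submodule add git://github.com/example/lib.git ext/lib
$ git commit -m "Add lib submodule, pinned at its current commit"
$ git clone --recursive git://github.com/example/app.git   # clones pinned submodules too

$ cd ext/lib
$ git checkout a1b2c3d    # move the submodule to another commit
$ cd ../..
$ git add ext/lib         # the parent now records the new pinned commit
$ git commit -m "Bump lib submodule"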

Possible Workflows

By remembering this core concept and reflecting on it, you can see that submodules support some workflows well and others less optimally. There are at least three scenarios where submodules are a fair choice:

  • When a component or subproject is changing too fast or upcoming changes will break the API, you can lock the code to a specific commit for your own safety.

  • When you have a component that isn’t updated very often and you want to track it as a vendor dependency. I do this for my vim plugins, for example.

  • When you are delegating a piece of the project to a third party and you want to integrate their work at a specific time or release. Again this works when updates are not too frequent.

Credit to finch for the well-explained scenarios.

Useful Tips Incoming

The submodule infrastructure is powerful and allows for useful separation and integration of codebases. There are, however, simple operations that lack a streamlined procedure or strong command-line support.

If you use git submodules in your project, you either have run into these or you will. When that happens you will have to look the solution up. Again and again. Let me save you the research time: Instapaper, Evernote or old-school bookmark this page (:D:D) and you will be set for a while.

So, here is what I have for you:

How to swap a git submodule with your own fork

This is a very common workflow: you start using someone else’s project as a submodule, but after a while you find yourself needing to customize and tweak it, so you want to fork the project and replace the submodule with your own fork. How is that done?

The submodules are stored in .gitmodules:


$ cat .gitmodules
[submodule "ext/google-maps"]
    path = ext/google-maps
    url = git://git.naquadah.org/google-maps.git

You can just edit the url with a text editor and then run the following:


$ git submodule sync

This updates .git/config, which contains a copy of this submodule list (you could also just edit the relevant [submodule] section of .git/config manually).
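
For instance, assuming a hypothetical fork under a yourname GitHub account, the edited entry would read:

[submodule "ext/google-maps"]
    path = ext/google-maps
    url = git://github.com/yourname/google-maps.git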

(Stack Overflow reference)

How do I remove a submodule?

It is a fairly common need but has a slightly convoluted procedure. To remove a submodule you need to:

  1. Delete the relevant line from the .gitmodules file.
  2. Delete the relevant section from .git/config.
  3. Run git rm --cached path_to_submodule (no trailing slash).
  4. Commit and delete the now untracked submodule files.

(Stack Overflow reference)
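
As a concrete sketch of those four steps, with a hypothetical submodule at ext/google-maps:

$ $EDITOR .gitmodules              # step 1: delete the submodule's lines
$ $EDITOR .git/config              # step 2: delete the matching section
$ git rm --cached ext/google-maps  # step 3: no trailing slash
$ git commit -m "Remove google-maps submodule"
$ rm -rf ext/google-maps           # step 4: delete the now-untracked files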

How do I integrate a submodule back into my project?

Or, in other words, how do I un-submodule a git submodule? If all you want is to put your submodule code into the main repository, you just need to remove the submodule and re-add the files into the main repo:

  1. Delete the reference to the submodule from the index, but keep the files:

     git rm --cached submodule_path

     (no trailing slash)

  2. Delete the .gitmodules file, or if you have more than one submodule, edit this file to remove this submodule from the list:

     git rm .gitmodules

  3. Remove the .git metadata folder (make sure you have a backup of it):

     rm -rf submodule_path/.git

  4. Add the submodule to the main repository index:

     git add submodule_path
     git commit -m "remove submodule"

NOTE: The procedure outlined above is destructive to the submodule's history. If you want to retain a congruent history of your submodules, you have to work through a fancy “merge”. For more details I'll refer you to this very complete Stack Overflow reference.

How to ignore changes in submodules

Sometimes your submodules might become dirty by themselves. For example, if you use git submodules to track your vim plugins, they might generate or modify local files such as helptags. Unfortunately, git status will start to nag you about those changes, even though you are not interested in them at all and have no intention of committing them.

The solution is very simple. Open the file .gitmodules at the root of your repository and for each submodule you want to ignore add ignore = dirty, like in this example:


[submodule ".vim/bundle/msanders-snipmate"]
  path = .vim/bundle/msanders-snipmate
  url = git://github.com/msanders/snipmate.vim.git
  ignore = dirty

Thanks to Nils for the great explanation.

Danger Zone! Pitfalls Interacting with Remotes

As the Git Submodule Tutorial on kernel.org reminds us, there are a few important things to note when interacting with your remote repositories.

The first is to always publish the submodule change before publishing the change to the superproject that references it. This is critical: if the superproject references a submodule commit that hasn't been pushed, others who clone the repository won't be able to check that commit out.

The second is to always commit all of your changes before running git submodule update, because if there are local changes they will be overwritten!
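
A quick sketch of the safe ordering (submodule path hypothetical):

$ cd ext/google-maps
$ git push        # publish the submodule's commits first
$ cd ../..
$ git push        # only then publish the superproject commit referencing them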

Conclusions

Armed with these notes you should be able to tackle many common recurring workflows that come up when using submodules. In a future post I will write about alternatives to git submodule.

Follow me @durdn and the awesome @AtlDevtools team for more DVCS rocking.

wundram
4051 days ago
more on git. submodules

Microsoft Windows Gets More Love From Git

As of Tuesday, Windows developers can download the public beta of SourceTree, a visual tool for working with GitHub, Bitbucket, Stash or any other code repository based on Git or Mercurial.
wundram
4059 days ago
better windows git client.

Game Fight!: Sim City 5 vs. Gamers

Man oh man, do I want to get some city building on, and I want to get it on, like, NOW. So you can imagine how excited I was to learn about a new iteration of SimCity, the city simulator. It takes all the addictiveness of cocaine but makes it cheaper and arguably less detrimental to a healthy heart. The reviews came out and the game was deemed "GREAT" by excited city-building nerds everywhere.

Kids and adults alike waited for the game to be released so they could join in the fun of building and destroying cities; cities filled with people like you and me. Maybe those people in the game are sitting in their virtual apartments, on their virtual computers, playing a virtual city building simulation. Or maybe WE'RE the SIMULATION! Is your mind blown yet? Good. Because now we have to go into the dimension that exists on the side opposite the screen to your virtual city: reality.

Do you remember that 1990s Winona Ryder movie "Reality Bites"? Now it's more than just the name of a coming-of-age movie; it describes the difference between the game at review time and the game when released. In case you haven't heard, SimCity is a super great game that no one was able to play at first. In an effort to curb piracy, SimCity requires you to be always online. Being always online means there needs to be a warehouse full of servers, chugging along, powered by coal probably, or maybe baby souls. These servers exist to verify that your copy of the game is authentic.

This is going to get a bit wonky, so bear with me. Basically, the technology works like this: When a player switches on their game, it sends a message to one of the server computers. Inside the message is a note that is sealed with wax. The server then inspects the wax seal for authenticity and examines it for signs of tampering. If it is determined that it is both authentic and not tampered with, the server reads the note, which says "I am a real copy." This satisfies the server, which then sends back a message to the client computer that says "You are allowed to play your game now."


The problem was, when everyone turned on their computers to play SimCity, there were too many of these wax sealed messages arriving at the servers, and so the servers demanded better treatment and went on strike. This left players unable to play the game at all. Russ Pitts, Features Editor and Co-Founder of Polygon.com, explained it to me much more eloquently. He made no mention of the wax-stamp authenticating algorithm.

"The server failures (as far as we know now, based on a handful of conversations with the company) have very little to do with DRM tech and very much to do with plain, old, boring server supply and demand." Supply and demand being a metaphorical way to describe wax seals.

Dang, he has a point. Basically the disastrous launch of SimCity and Polygon's now infamous reduction in review score (from a 9.5 pre-launch to a current 4) was caused by a failure to adequately support an always-online system of DRM. But DRM is the worst, right? "The reality is that games companies do not - and cannot - base their business model on expecting that 80% of their product will be consumed but not paid for. Especially in the case of online games that require constant server upkeep and maintenance."

He's right again. I'm starting to think that Russ Pitts is grounded in reality as far as his views on DRM are concerned, especially in pointing out that DRM has become a necessary evil. Arguments based on facts have NO PLACE in gaming discussions. This is Game Fight!

But if the DRM didn't exist, would these server problems even have been an issue? Maxis says it was their idea to implement such a system, so the problem with the servers falls on their heads, not EA's. Which is a relief, because it's too easy to hate on EA these days. I imagine there are corporate people wearing suits whose job it is to figure out which path would cause the least amount of unclaimed revenue: launching with DRM and having the servers melt into lava from the stress, or no DRM at all, making paying customers appreciated but also uncommon.

In our conversation, Pitts points out that there are people who will never, ever accept DRM. Ever. And so no matter what happened with this launch, "for the anti-DRM camp, the mere existence of a DRM strategy is abhorrent. They were going to be angry about this issue whether the game functioned properly or not." Pitts posits that had the DRM worked as expected, "the mainstream consumers would probably have never noticed."

As for the game itself, Pitts said "I wish everyone who wanted to could be playing the same game I played for my review, including me. Maxis created a truly amazing experience. It was far better than I expected it to be. The fact that what is available now is not reflective of the experience I had is frustrating, because I'm still convinced there's a 9.5 game in there."

Man, Russ Pitts is good. I'll give him credit. You see, I was hoping I could spin this whole thing into an emotionally driven diatribe against EVERYTHING corporate, punctuating each scathing criticism of EA with multiple punctuation marks, mostly exclamations and questions. But Pitts talks a lot of sense. Yeah, the DRM sucks forever, but companies lose money on PC games pretty hard because everyone just shares the floppies with their friends. I know there are a million excuses out there from the anti-DRM crowd ("I'll pay when you make a decent product, one decent enough that I'll stop paying nothing for it!"), but DRM is here to stay. Hopefully it will become increasingly less intrusive as time marches on. After the disaster of SimCity, I have a feeling it will. Eventually, we won't even notice DRM at all. The transition from where we are today to a future where we are all owned by multiple corporations who harvest our organs will be so gradual that it will be practically seamless.

I guess the loser of this Game Fight! is... well, me. This piece was supposed to be all salacious rumors and ridiculous hyperbole. Instead it's a look into how wax sealed envelopes help you play games. Point: SimCity.

Don't give up the Game Fight! The fray rages on in the Game Fight! archive.

wundram
4060 days ago
Nice comments on the DRM issue.

Instant Java provisioning with Vagrant and Puppet: Stash one click install


Being an efficiency and productivity freak, I always try to streamline and automate repetitive tasks. As such, my antennae went up immediately when I started hearing about provisioning frameworks, and I began to incorporate them more and more into my development workflow. A perfect opportunity to take advantage of this came up while ramping up as Developer Advocate here at Atlassian.

Have you heard of Vagrant yet? It is awesome. Why? It automates much of the boilerplate work we developers have to endure while setting up our platforms and toolkits. So what does Vagrant do? In their words, it allows you to create and configure lightweight, reproducible, and portable development environments.

So what better testbed for this tool than the shiny new Stash 2.2 release?

Objective: provide me and fellow developers an (almost) one-click install for Stash.

Alright, alright, I say almost because you do need a few dependencies if you want to use a configuration/provisioning framework: specifically a recent version of VirtualBox, Vagrant and of course git.

First try out this magic for yourself and then I’ll walk you through some interesting details of the setup:

  1. Install VirtualBox and Vagrant and make sure you have git available.

  2. Open your favorite terminal and add a base virtual machine, or provide your own:

     vagrant box add base http://files.vagrantup.com/precise32.box

  3. Clone the stash-vagrant-install project by typing at your command line:

     git clone https://bitbucket.org/durdn/stash-vagrant-install.git
     cd stash-vagrant-install

  4. Start up the vm and automatically provision all dependencies:

     vagrant up

  5. ??? There is no step 5. *** You’re DONE! ***

Note: be sure to let the process finish as it might take a while to download all the required packages.

After it finishes you will be able to access your brand new Stash installation with a browser at http://localhost:7990/setup.

If you need to access the vm, you can ssh into the box; you will find the Stash installation in the /vagrant folder:

vagrant ssh
cd /vagrant

And if you need to start Stash manually you can just type:

STASH_HOME=/vagrant/stash-home /vagrant/atlassian-stash-2.2.0/bin/start-stash.sh

Under the hood

Now let me explain how all this works in some detail. Under the hood I used an absolutely basic Vagrant setup and a single Puppet manifest. Here is the Vagrantfile:

Vagrant::Config.run do |config|
  config.vm.box = "base"
  config.vm.forward_port 7990, 7990
  config.vm.provision :puppet, :module_path => "modules" do |puppet|
    puppet.manifests_path = "manifests"
    puppet.manifest_file = "default.pp"
  end
end

As you can see, it only specifies the port forwarding for where Stash will run (port 7990) and Puppet as the provisioning system. Nothing more.

Java Installation Blues

The only major requirement (and the complication) of this setup comes from the task of installing Java 7 and automatically accepting the Oracle license terms. Java is not included in the Ubuntu repositories for various licensing reasons, so we have to cater for it ourselves.

First we need to instruct Puppet about apt; we do this by requiring the library:

include apt

This allows us to interact with Ubuntu packages in a more advanced fashion. Then we need to add a repository to the apt sources, one that includes the Java installer:

apt::ppa { "ppa:webupd8team/java": }

From there, update the apt infrastructure in two steps, first without the extra ppa repository and then with it:

exec { 'apt-get update':
  command => '/usr/bin/apt-get update',
  before  => Apt::Ppa["ppa:webupd8team/java"],
}

exec { 'apt-get update 2':
  command => '/usr/bin/apt-get update',
  require => [ Apt::Ppa["ppa:webupd8team/java"], Package["git-core"] ],
}

After this we automatically accept the Java license:

exec { "accept_license":
  command   => "echo debconf shared/accepted-oracle-license-v1-1 select true | sudo debconf-set-selections && echo debconf shared/accepted-oracle-license-v1-1 seen true | sudo debconf-set-selections",
  cwd       => "/home/vagrant",
  user      => "vagrant",
  path      => "/usr/bin/:/bin/",
  before    => Package["oracle-java7-installer"],
  logoutput => true,
}
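
Note that the manifest must also declare the Java installer package itself, which the snippets here reference as Package["oracle-java7-installer"]. That declaration isn't shown in this excerpt; a plausible sketch (my assumption, so check the actual manifest in the repository) is:

package { "oracle-java7-installer":
  ensure  => installed,
  # assumed ordering: run only after the extra ppa is in place and the license is accepted
  require => [ Exec["apt-get update 2"], Exec["accept_license"] ],
}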

Downloading and Running Stash

The rest is about downloading the Stash installation file:

exec { "download_stash":
  command   => "curl -L http://www.atlassian.com/software/stash/downloads/binary/atlassian-stash-2.2.0.tar.gz | tar zx",
  cwd       => "/vagrant",
  user      => "vagrant",
  path      => "/usr/bin/:/bin/",
  require   => Exec["accept_license"],
  logoutput => true,
  creates   => "/vagrant/atlassian-stash-2.2.0",
}

Creating its home folder:

exec { "create_stash_home":
  command   => "mkdir -p /vagrant/stash-home",
  cwd       => "/vagrant",
  user      => "vagrant",
  path      => "/usr/bin/:/bin/",
  require   => Exec["download_stash"],
  logoutput => true,
  creates   => "/vagrant/stash-home",
}

And kicking it off in the background:

exec { "start_stash_in_background":
  environment => "STASH_HOME=/vagrant/stash-home",
  command     => "/vagrant/atlassian-stash-2.2.0/bin/start-stash.sh &",
  cwd         => "/vagrant",
  user        => "vagrant",
  path        => "/usr/bin/:/bin/",
  require     => [ Package["oracle-java7-installer"],
                   Exec["accept_license"],
                   Exec["download_stash"],
                   Exec["create_stash_home"] ],
  logoutput   => true,
}

Now we have a system that has all the required packages ready for Stash to run and that actually kicks it off in the background for you. Pretty awesome!

If you are interested in learning more check out the Puppet manifest to see all the magic in context.

Conclusions

In conclusion: Vagrant and Puppet rock, and they can help any coder or system administrator assemble development boxes easily. This is great when evaluating solutions or when providing complete setups with all the required dependencies. Oh, and don’t forget to try Stash 2.2 out!

wundram
4061 days ago
This puppet thing looks like it could be really good for configuring servers from the build. Does it work well with windows?