tag:blogger.com,1999:blog-3603291200743583642024-03-14T05:55:05.517+02:00MinimalLess is moreJurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.comBlogger77125tag:blogger.com,1999:blog-360329120074358364.post-633634245029048542017-02-22T16:36:00.001+02:002017-02-22T16:36:25.929+02:00SuddenlyI decided to start writing again, but I moved to github pages.<br />
Follow me on <a href="https://dracoater.github.io">https://dracoater.github.io</a>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-12325152087087808252013-12-14T15:47:00.000+02:002014-01-04T13:27:58.271+02:00Testing Chef Cookbooks. Part 2.5. Speeding Up ChefSpec Run<div dir="ltr" style="text-align: left;" trbidi="on"><b>Disclaimer</b>: as of Jan 4 2014, this is already <a class="external" href="https://github.com/sethvargo/chefspec#faster-specs">implemented inside</a> ChefSpec (>= 3.1.2), so you don't have to do anything. The post just describes the problem and solution in more detail.<br />
<br />
Last time we were speaking about testing Chef recipes, <a href="http://dracoater.blogspot.com/2013/09/testing-chef-cookbooks-part-2-chefspec.html">I introduced ChefSpec to you</a> as a very good tool for running unit tests on your cookbooks. But lately I encountered a problem with it. I have over 800 unit tests (aka <i>examples</i> in the RSpec world) and now it takes about 20 minutes to run. <b>20 minutes!!!</b> That's an extremely long time for this kind of task. So I decided to dig into what exactly was taking so much time.<br />
<br />
My examples look like this (many recipes have similar example groups for <i>windows</i> and <i>mac_os_x</i>):<br />
<br />
<pre><code class="ruby">describe "example::default" do
  context 'ubuntu' do
    subject { ChefSpec::ChefRunner.new( :platform => 'ubuntu', :version => '12.04' ).converge described_recipe }
    let( :node ) { subject.node }

    it { should do_something }

    it 'does some other thing' do
      should do_another_thing
    end

    it { should do_much_more }
  end
end</code></pre><br />
I put some printouts inside the <i>describe</i>, <i>context</i>, <i>subject</i> and <i>let</i> blocks, and also read the RSpec documentation about <i>let</i> and <i>subject</i>. It turned out that <i>subject</i> and <i>let</i> blocks are evaluated for every test, i.e. their values are cached when accessed within one test (<i>it</i> block), but not across the tests inside a test group (in our case the <i>ubuntu</i> context). So for these tests <i>subject</i> is actually calculated 3 times. That is not a problem for ordinary RSpec tests, where the subject most of the time is an object returned by a constructor, e.g. <code>User.new</code>. But in the ChefSpec case the Subject under Test (SuT) is a <i>converge</i> operation, which is more costly and takes more time to compute. Another difference is that, as opposed to ordinary RSpec tests, in ChefSpec we do not change the SuT, but just make sure that it contains the right resources with the right actions. So running <i>converge</i> for every example is a huge overhead.<br />
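To make that caching behaviour concrete, here is a toy imitation of <i>let</i>'s semantics (not RSpec's real implementation, just the observable behaviour): the value is memoized per example instance, and since every example runs in a fresh instance, the block re-runs once per example.

```ruby
# Toy imitation of RSpec's `let` semantics (NOT the real implementation).
# The value is memoized per instance; RSpec creates a fresh instance of
# the example group for every example, so the block runs once per example.
class ExampleGroup
  def self.let(name, &block)
    define_method(name) do
      @__memoized ||= {}
      @__memoized[name] = instance_eval(&block) unless @__memoized.key?(name)
      @__memoized[name]
    end
  end
end

$converges = 0
class UbuntuGroup < ExampleGroup
  let(:chef_run) { $converges += 1; :converged_node }
end

example1 = UbuntuGroup.new
example1.chef_run
example1.chef_run          # second access within the same "example" hits the cache
UbuntuGroup.new.chef_run   # a new "example" converges again
$converges # => 2
```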
<br />
How can we fix that? Well, obviously we should somehow share the value across the examples. I tried different approaches; some of them worked partially, some didn't work at all. The simplest one was to use a <i>before :all</i> block.<br />
<br />
<pre><code class="ruby">describe "example::default" do
  context 'ubuntu' do
    before( :all ) { @chef_run = ChefSpec::ChefRunner.new( :platform => 'ubuntu', :version => '12.04' ).converge described_recipe }
    subject { @chef_run }
    [...]
  end
end</code></pre><br />
It requires only a small change in the spec files, but the drawback of this approach is that <b>no mocking is supported in a <i>before :all</i> block</b>. So if you have to mock, for example, file existence, <b>it will not work</b>:<br />
<br />
<pre><code class="ruby">describe "example::default" do
  context 'ubuntu' do
    before :all do
      ::File.stub( :exists? ).with( '/some/path/' ).and_return false
      @chef_run = ChefSpec::ChefRunner.new( :platform => 'ubuntu', :version => '12.04' ).converge described_recipe
    end
    subject { @chef_run }
    [...]
  end
end</code></pre><br />
<a class="external" href="https://www.relishapp.com/rspec/rspec-core/v/3-0/docs/helper-methods/define-helper-methods-in-a-module">RSpec allows you to define helper methods in a module and extend your example groups with it</a>, and the idea was to write a method similar to <i>let</i>, but one that caches the result across examples too. Create a <i>spec_helper.rb</i> file somewhere in your Chef project and add the following lines there:<br />
<br />
<pre><code class="ruby">module SpecHelper
  @@cache = {}
  FINALIZER = lambda {|id| @@cache.delete id }

  def shared( name, &block )
    location = ancestors.first.metadata[:example_group][:location]
    define_method( name ) do
      unless @@cache.has_key? Thread.current.object_id
        ObjectSpace.define_finalizer Thread.current, FINALIZER
      end
      @@cache[Thread.current.object_id] ||= {}
      @@cache[Thread.current.object_id][location + name.to_s] ||= instance_eval( &block )
    end
  end

  def shared!( name, &block )
    shared name, &block
    before { __send__ name }
  end
end

RSpec.configure do |config|
  config.extend SpecHelper
end</code></pre><br />
Values in <i>@@cache</i> live for the whole run (they are only cleaned up by the finalizer when a thread dies), and the same name can be used in different example groups, so I also use the <i>location</i> of the usage as part of the cache key; it looks like this: "./cookbooks/my_cookbook/spec/default_spec.rb:3". Now change <i>subject</i> into <i>shared( :subject )</i> in your specs:<br />
<br />
<pre><code class="ruby">describe "example::default" do
  context 'ubuntu' do
    shared( :subject ) { ChefSpec::ChefRunner.new( :platform => 'ubuntu', :version => '12.04' ).converge described_recipe }
    [...]
  end
end</code></pre><br />
And when running the tests you will now have to require the spec_helper.rb too:<br />
<br />
<pre><code class="shell">rspec --require ./relative/path/spec_helper.rb cookbooks/*/spec/*_spec.rb</code></pre><br />
If you use the rake task I introduced in <a href="http://dracoater.blogspot.com/2013/09/testing-chef-cookbooks-part-2-chefspec.html">previous post</a>, add the following line to it.<br />
<br />
<pre><code class="ruby">desc 'Runs specs with chefspec.'
RSpec::Core::RakeTask.new :spec, [:cookbook, :recipe, :output_file] do |t, args|
  [...]
  t.rspec_opts += ' --require ./relative/path/spec_helper.rb'
  [...]
end</code></pre><br />
And that's all! Now the tests run in 2 minutes. 10 times faster!</div>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-61214178870196893432013-10-02T18:26:00.000+03:002013-10-03T14:13:46.682+03:00Extracting Files From tar.gz With Ruby<div dir="ltr" style="text-align: left;" trbidi="on">I always thought that it should be a trivial task. There are even some <a href="http://stackoverflow.com/search?q=ruby+tar+gzip" class="external">stackoverflow</a> answers on that topic, but there is actually a catch that none of the answers mentions. Originally <a href="http://en.wikipedia.org/wiki/Tar_%28computing%29" class="external">tar did not support paths longer than 100 chars</a>. GNU tar is better: it implemented support for longer paths through a <i>hack</i> called <a href="http://stackoverflow.com/q/2078778/170230" class="external"><code>././@LongLink</code></a>. In short, if you stumble upon an entry in a tar archive whose path equals the above-mentioned <code>././@LongLink</code>, it means that the path of the following entry is longer than 100 chars and is truncated; the full path of the following entry is actually the content of the current entry. So when extracting files from a tar archive we must also take this possibility into account.<br />
<pre><code class="ruby">require 'fileutils'
require 'rubygems/package'
require 'zlib'

TAR_LONGLINK = '././@LongLink'
tar_gz_archive = '/path/to/archive.tar.gz'
destination = '/where/extract/to'

Gem::Package::TarReader.new( Zlib::GzipReader.open tar_gz_archive ) do |tar|
  dest = nil
  tar.each do |entry|
    if entry.full_name == TAR_LONGLINK
      # This entry's content is the full path of the next entry
      dest = File.join destination, entry.read.strip
      next
    end
    dest ||= File.join destination, entry.full_name
    if entry.directory?
      FileUtils.rm_rf dest unless File.directory? dest
      FileUtils.mkdir_p dest, :mode => entry.header.mode, :verbose => false
    elsif entry.file?
      FileUtils.rm_rf dest unless File.file? dest
      File.open dest, "wb" do |f|
        f.print entry.read
      end
      FileUtils.chmod entry.header.mode, dest, :verbose => false
    elsif entry.header.typeflag == '2' # symlink
      File.symlink entry.header.linkname, dest
    end
    dest = nil
  end
end</code></pre></div>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-79013097789494727522013-09-23T14:49:00.000+03:002013-12-12T22:39:29.468+02:00Testing Chef Cookbooks. Part 2. Chefspec.<div dir="ltr" style="text-align: left;" trbidi="on">So now you have <a href="http://dracoater.blogspot.com/2013/09/testing-chef-cookbooks-part-1-foodcritic.html">fewer errors and typos in your cookbooks</a>, thanks to <a class="external" href="http://acrmp.github.io/foodcritic/">foodcritic</a>. But you are still far from confident that your cookbook will not fail to run on some node. The next step toward that confidence is unit tests (aka specs in the ruby world).<br />
<br />
Ruby already has a great spec library for unit testing any kind of project - it's called <a class="external" href="http://rspec.info/">rspec</a>. Many specialized unit test libraries are based on it, and so is <a class="external" href="https://github.com/acrmp/chefspec">chefspec</a> - the gem for writing unit tests for your cookbooks.<br />
<br />
Chefspec makes it easy to write unit tests for Chef recipes and get fast feedback on cookbook changes. So first let's install it.<br />
<br />
<pre><code class="bash">sudo gem install rake chefspec --no-ri --no-rdoc</code></pre><br />
This will also add <code>create_specs</code> command to <code>knife</code>, which creates specs for particular existing cookbook:<br />
<br />
<pre><code class="bash">knife cookbook create_specs my_cookbook</code></pre><br />
After this you will get a separate <code>*_spec.rb</code> file in <code>my_cookbook/spec/</code> for every recipe file. The Chefspec readme has very good examples teaching how to write tests. One thing I personally do differently is that I use <code>subject</code> and <code>should</code> instead of <code>let(:chef_run)</code> and <code>expect(chef_run).to</code>, because it allows omitting the subject in some cases. (Read why <a class="external" href="http://myronmars.to/n/dev-blog/2012/06/rspecs-new-expectation-syntax">RSpec developers actually recommend the expect-to syntax</a>.)<br />
<br />
<pre><code class="ruby"># Chefspec recommendations
describe "example::default" do
  let( :chef_run ){ ChefSpec::ChefRunner.new.converge described_recipe }

  it { expect(chef_run).to do_something }

  it 'does some other thing' do
    expect(chef_run).to do_another_thing
  end
end

# My typical specs
describe "example::default" do
  subject { ChefSpec::ChefRunner.new.converge described_recipe }

  it { should do_something }

  it 'does some other thing' do
    should do_another_thing
  end
end</code></pre><br />
We can also integrate it with Jenkins by making rspec output results in JUnit xml format that Jenkins understands. We need another gem for that: <br />
<pre><code class="bash">sudo gem install rake rspec_junit_formatter --no-ri --no-rdoc</code></pre>Now we can run rspec with the following parameters and it will output test results into <code>test-results.xml</code>: <br />
<pre><code class="bash">rspec my_cookbook --format RspecJunitFormatter --out test-results.xml</code></pre>Rspec also supports rake, so it may be more convenient to use it to run specs on your cookbooks: <br />
<pre><code class="ruby">desc 'Runs specs with chefspec.'
RSpec::Core::RakeTask.new :spec, [:cookbook, :recipe, :output_file] do |t, args|
  args.with_defaults( :cookbook => '*', :recipe => '*', :output_file => nil )
  t.verbose = false
  t.fail_on_error = false
  t.rspec_opts = args.output_file.nil? ? '--format d' : "--format RspecJunitFormatter --out #{args.output_file}"
  t.ruby_opts = '-W0' # it supports ruby options too
  t.pattern = "cookbooks/#{args.cookbook}/spec/#{args.recipe}_spec.rb"
end</code></pre></div>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-35950629723628141032013-09-17T11:41:00.000+03:002013-09-23T14:50:13.134+03:00Testing Chef Cookbooks. Part 1. Foodcritic.<div dir="ltr" style="text-align: left;" trbidi="on">This post has been in draft for almost a year. At last I have made myself return to my blog and continue writing.<br />
<br />
A couple of years ago we adopted automating our server configuration through Chef recipes. In the beginning we didn't have many cookbooks and all the recipes were more or less simple. But as time passed, new applications had to be installed and configured, including some not-so-simple scenarios. It turned out that the "hit and miss" method wasn't good enough. We came across a lot of errors, such as:<br />
<ul style="text-align: left;"><li>typos: simple as that. Although knife makes a syntax check when uploading cookbooks to the server, many times it was a wrong path or something like that, which could be revealed only after we tried to provision a node.</li>
<li>missed dependencies: we ran our scripts on EC2 instances, and to save time on bootstrapping we did not create a new instance every time we changed something in our scripts, but only the first time. When we finally got a recipe working on this node, it sometimes turned out that it would not work on another, clean node, because we had forgotten some dependencies that happened to be already installed on our test node.</li>
<li>interfering applications: some recipes were used for configuring several applications. Although these applications were very similar, they were not identical, and sometimes changing a recipe to fix the installation of one application broke the installation of another.</li>
</ul><br />
At last we came to the idea that infrastructure code, like any other code, should be tested. Currently we have built a pipeline using Jenkins that first runs a syntax check, then coding convention tests, then unit tests and finally integration tests on the cookbooks. Only if all the tests pass does Jenkins run <code>knife cookbook upload</code>, publishing the cookbooks to the chef-server.<br />
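The gating logic of that pipeline can be sketched with plain rake task dependencies (a toy sketch with made-up task names, not our actual Rakefile; real tasks would shell out to knife, foodcritic, rspec, etc.):

```ruby
# Toy sketch: the upload task depends on all check tasks, so it only
# runs after every check has finished successfully (a failing check
# raises and aborts the chain).
require 'rake'
include Rake::DSL

order = []
task(:syntax)      { order << :syntax }
task(:lint)        { order << :lint }
task(:spec)        { order << :spec }
task(:integration) { order << :integration }
task(:upload => [:syntax, :lint, :spec, :integration]) { order << :upload }

Rake::Task[:upload].invoke
order # => [:syntax, :lint, :spec, :integration, :upload]
```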
<br />
In my following posts I will share with you how we established this testing architecture for our Chef recipes. There will be 3 parts in the tutorial; we will start with the easiest checks, which only make sure that your ruby code can be parsed and follows some code conventions.<br />
<br />
First of all make sure you are running ruby 1.9.2 or newer. (We use Ubuntu on our linux servers, so all the code I provide is tested in Ubuntu 12.04 LTS.)<br />
<pre><code class="bash">sudo aptitude install ruby1.9.3</code></pre>Now we need <code><a class="external" href="http://rubygems.org/gems/rake/">rake</a></code> for building our project and <code><a class="external" href="http://acrmp.github.com/foodcritic/">foodcritic</a></code> for testing.<br />
<pre><code class="bash">sudo gem install rake foodcritic --no-ri --no-rdoc</code></pre>The Chef cookbook repository has a <a class="external" href="https://github.com/opscode/chef-repo/blob/master/Rakefile">Rakefile</a> in it. Now that we have rake installed, we can run rake inside the folder the Rakefile is in.<br />
<pre><code class="bash">rake</code></pre>This should run a syntax check on the recipes, templates and files inside your cookbooks. Next we can run foodcritic tests on your cookbooks. Type<br />
<pre><code class="bash">foodcritic cookbooks</code></pre>and hit enter (assuming that "cookbooks" is the folder your cookbooks are in). If it finds any warnings, it will print them out; otherwise an empty string will be printed.<br />
<br />
Of course you may not agree with some of the rules and would like to ignore them. This can be achieved by providing additional options to the <code>foodcritic</code> command. You can also write your own rules and add them to the checks. For example, there was a <a class="external" href="http://acrmp.github.io/foodcritic/#FC001">FC001</a> rule, which stated "Use strings in preference to symbols to access node attributes". I actually preferred the opposite: symbols over strings. So I created a new rule:<br />
<pre><code class="ruby">rule "JT001", "Use symbols in preference to strings to access node attributes" do
  tags %w{style attributes jt}
  recipe do |ast|
    attribute_access( ast, :type => :string )
  end
end
</code></pre>and saved it into <code>foodcritic-rules.rb</code> file. Then it was easy to disable the existing FC001 rule and enable mine with:<br />
<pre><code class="bash">foodcritic cookbooks --include foodcritic-rules.rb --tags ~FC001</code></pre>There are also some <a class="external" href="https://github.com/etsy/foodcritic-rules">3<sup>rd</sup> party rules</a> available, so you have something to start with.<br />
<br />
Now we will join rake and foodcritic together by creating a rake task that runs foodcritic tests. Add a new rake task similar to:<br />
<pre><code class="ruby">desc "Runs foodcritic linter"
task :foodcritic do
  if Gem::Version.new("1.9.2") <= Gem::Version.new(RUBY_VERSION.dup)
    sh "foodcritic cookbooks --epic-fail correctness"
  else
    puts "WARN: foodcritic run is skipped as Ruby #{RUBY_VERSION} is < 1.9.2."
  end
end

task :default => 'foodcritic'</code></pre>Now when you run <code class="bash">rake</code> it should run both the syntax and code convention tests.<br />
<br />
<a href="http://dracoater.blogspot.com/2013/09/testing-chef-cookbooks-part-2-chefspec.html">Next post will be about unit tests</a> using <a class="external" href="https://github.com/acrmp/chefspec">chefspec</a>.</div>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-22726939892861763202013-04-08T16:35:00.001+03:002013-04-08T16:35:05.412+03:00Bootstrapping ChefJust an <a href="http://zeroturnaround.com/labs/pragmatic-devops-bootstrapping-chef/" class="external" target="_blank">introduction post about Chef</a>.<br />
Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-71670693401924746322013-04-03T11:26:00.000+03:002013-04-03T11:26:59.472+03:00Jenkins Plugins. Again.<div dir="ltr" style="text-align: left;" trbidi="on">
We decided to review our <a href="http://dracoater.blogspot.com/2011/08/top-10-jenkins-must-have-plugins.html" target="_blank">older post about Jenkins plugins</a> and <a class="external" href="http://zeroturnaround.com/labs/jenkins-protip-update-your-ci-environment-with-new-plugins" target="_blank">introduce some more interesting plugins to use with Jenkins</a>. </div>
Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-32784699630209115782012-09-10T11:46:00.001+03:002012-09-10T11:56:30.986+03:00How to Run Dynamic Cloud Tests with 800 Tomcats, Amazon EC2, Jenkins and LiveRebel<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;">
<img alt="" class="align-left" height="240" src="http://images.sodahead.com/polls/000513339/polls_onemillioncats1_5840_922377_poll_xlarge.jpeg" style="float: left; margin: 5px;" width="320" /></div>
<br />
I was brainstorming in the shower the other day, and I thought "Eureka!" - I need to bootstrap and test my Java app on a dynamic cluster with 800 Tomcat servers right now! Then, breakfast.<br />
<br />
Obviously, every now and then you need to build a dynamic cluster of 800 Tomcat machines and then run some tests. Oh, wait, you don’t? Well, let's say you do. Provisioning your machines on the cloud for testing is a great way to "exercise" your app and work on:<br />
<ul>
<li><b>Warming up</b>: Bootstrap a clean slate, install the software, run your tests</li>
<li><b>Checking your Processes</b>: Smoke testing for deploying the app to production</li>
<li><b>Ensuring success</b>: Checking load handling before launching the application to real clientele</li>
<li><b>Leaving nothing behind</b>: After you've got all green lights, shut it all down and watch it disappear</li>
</ul>
<br />
At ZeroTurnaround, we need this for testing <a class="external" href="http://liverebel.com/">LiveRebel</a> with larger deployments. LiveRebel is a tool to manage JEE production and QA deployments and online updates. It is crucial that we support large clusters of machines. Testing such environments is not an easy task but luckily in 2012 it is not about buying 800 machines but only provisioning them in the cloud for some limited time. In this article I will walk you through setting up a dynamic Tomcat cluster and running some simple tests on them. (Quick note: When I started writing this article, we had only tested this out with 100 Tomcat machines, but since then we grew to be able to support 800 instances with LiveRebel and the other tools).<br />
<br />
<h3 dir="ltr">
Technical Requirements</h3>
Let me define a bunch of non-functional requirements that I've thought up. The end result should have 800 Tomcat nodes, each configured with LiveRebel. A load balancer should sit in front of the nodes and provide a single URL for tests, and we'll use the LiveRebel Command Center to manage our deployments.<br />
<br />
Naturally, this is all easier said than done. The hurdles that we will need to overcome to achieve this are:<br />
<ul>
<li>Provisioning all the nodes - starting/stopping Amazon EC2 instances</li>
<li>Installing required software - Java, Tomcat, LiveRebel</li>
<li>Configuring Tomcat cluster - (think jvmRoute param in <code>conf/server.xml</code>)</li>
<li>Configuring a load balancer (Apache) with all the provisioned IP addresses</li>
<li>Automation - one-click provision/start/stop/terminate on the cluster using Jenkins</li>
</ul>
<h3 dir="ltr">
Tools</h3>
<div dir="ltr">
We chose Amazon AWS as our cloud provider, namely because we've become familiar with them over the last couple of years. For provisioning we use <a class="external" href="http://wiki.opscode.com/display/chef/Knife">Knife</a> and for configuration management we like <a class="external" href="http://www.opscode.com/chef/">Chef</a>. For automation, we went with <a class="external" href="http://jenkins-ci.org/">Jenkins</a> (I love <a class="external" href="http://www.cloudbees.com/why-do-you-like-jenkins.cb">Jenkins</a>), and we have two jobs: one to start the cluster and one to stop it. Tests are not automated at the moment. Before going further you have to have a Chef server running on some machine (not necessarily your own workstation) and Knife installed and configured on your Jenkins machine.</div>
<br />
<h3 dir="ltr">
Architecture</h3>
<h4 style="text-align: left;">
Loadbalancer</h4>
<div dir="ltr">
First we have to create/launch a loadbalancer instance. Software to configure:</div>
<br />
<ul>
<li>Install Apache</li>
<li>Install/enable Apache loadbalancer module</li>
<li>Update Apache configuration</li>
<li>Install LiveRebel Command Center and start it (could be a separate machine but we’ll use this instance for 2 services)</li>
</ul>
<br />
The load balancer should check in with the chef-server and provide its own IP address. LiveRebel Command Center should be running and accepting incoming requests on the default port (9001).<br />
<h4 style="text-align: left;">
LiveRebel node(s)</h4>
<div dir="ltr">
As soon as the load balancer is ready we will create/launch nodes. Node instances need to:</div>
<br />
<ul>
<li>Install a Tomcat instance</li>
<li>Figure out the IP of the load balancer</li>
<li>Download lr-agent-installer.jar from the LiveRebel CC</li>
<li>Run it (<code class="shell">java -jar lr-agent-installer.jar</code>)</li>
<li>Start Tomcat</li>
</ul>
Again, when everything is done the node will check in with chef-server and provide its IP address.<br />
<br />
After all nodes are ready we must update the Apache load balancer configuration and provide all the IP addresses of the nodes. This is because of the architecture of the load balancer. It needs to know the IP addresses of the machines it balances.<br />
<br />
<h3 dir="ltr">
Code (The Fun Part)</h3>
<div dir="ltr">
As we are using Chef, the natural approach is to create several cookbooks and a couple of roles that will help us with configuration. There are four cookbooks in total, one for each application: Apache, lrcc, Tomcat and Java. You can <a class="external" href="https://github.com/DracoAter/lr-cluster">get familiar with them on Github</a>. The code is provided mostly for reference, because it will not run as-is: some download links are missing. Also, it was tested only on Ubuntu, so if you are using some other distribution, you may need to tune it up.</div>
<div dir="ltr">
We are going to use the Knife command line tool to start and bootstrap our instances. Don’t forget to <a class="external" href="https://github.com/opscode/knife-ec2/">install and configure the Knife-EC2</a> rubygem. The first step is to create the load balancer. Provided you have configured the Knife EC2 plugin and prepared the right AMI to launch (or use the default provided by Ubuntu), it is relatively easy; just run (with the right parameters):</div>
<br />
<pre><code class="shell">knife ec2 server create --run-list 'role[lr-loadbalancer]' --node-name 'loadbalancer'</code></pre>
<br />
When the process finishes successfully you can go to https://your-server-address:9001/ and check that LiveRebel is running. It should be, but you will have to register or provide a license file. If you already have a license file, you can automate the registration step by copying the license into the LiveRebel home folder in your cookbook. Another thing to check is whether the load balancer has registered with the chef-server.<br />
<div dir="ltr">
Next step - creating lr-nodes. Your 800 nodes can be created using a similar Knife EC2 command run in loop:</div>
<br />
<pre><code class="shell">for i in {1..800} ; do
  knife ec2 server create --run-list 'role[lr-node]' --node-name "lr-node-$i"
done
</code></pre>
<br />
Everything is almost ready! All we need now is to create Jenkins jobs. The first one - we’ll name it lr-cluster-create - should run these 2 commands and start the cluster. And the other one lr-cluster-delete - stops it with these commands:<br />
<br />
<pre><code class="shell">ids=`knife ec2 server list | grep -E "lr-node|loadbalancer" | awk {'print $1'} | tr '\n' ' '`
knife ec2 server delete $ids --yes
knife node bulk delete "lr-node-.*" --yes
knife client bulk delete "lr-node-.*" --yes
knife node delete "loadbalancer" --yes
knife client delete "loadbalancer" --yes</code></pre>
<br />
<h3>
Conclusions</h3>
At this point, you should be well on your way towards bootstrapping a clean environment, installing, running your tests, checking load handling, and then you can shut it all down once you've seen everything working to your satisfaction.<br />
<br />
Your two Jenkins jobs are now able to spawn a dynamic Tomcat cluster. You can even parameterize your job and supply a number of nodes that you are interested in for a really dynamic cluster.<br />
<div dir="ltr">
One thing to note is that Amazon charges for EBS snapshots, so it is not very cost-effective to just stop the cluster. Terminating it will save you some money, especially if you like bigger clusters.</div>
<div dir="ltr">
Another thing is provisioning time. Provisioning the 800 nodes in parallel takes roughly 30 minutes. Starting a new instance from an AMI takes some time, but most of it goes to bootstrapping the clean environment with the Chef installation and to downloading packages and archives.</div>
<div dir="ltr">
Once you have the cluster started you still need to run the tests. We test deploying to and updating the whole cluster with LiveRebel. You could test your own web application and see how it handles the load.</div>
<div dir="ltr">
The next step for us is to automate the test suite and have these large-scale tests executed regularly. This will give us valuable feedback about releases in progress and their scalability.</div>
<div dir="ltr">
I hope this article has helped you get started with dynamic Tomcat clusters and I’m more than happy to go into more detail about any step here if you have questions - just contact me.</div>
</div>
Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-71343465010556717472012-03-27T11:45:00.000+03:002012-03-29T00:02:28.156+03:00GeekOut 2012 14-15 June<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<a href="http://geekout.ee/wp-content/uploads/2012/03/Geekout-invite.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="400" src="http://geekout.ee/wp-content/uploads/2012/03/Geekout-invite.jpg" width="400" /></a></div>
The GeekOut is back, now twice as long, informative, interesting and exciting! :D <a href="http://geekout.ee/register/" target="_blank">Registration has opened</a> - become an early geek and get the lowest price. Don't forget to <a href="http://geekout.ee/programme-2012/" target="_blank">check the programme out</a>.</div>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-67722498901726340922011-12-22T12:57:00.001+02:002011-12-22T12:57:42.871+02:00Stanford AI Class Finished<p>The Stanford AI class is over. It was very interesting and I learned a lot of new things. Unfortunately I sucked at the first 2 questions of the final exam and my score is not as high as I would like it to be. But there will be new interesting courses in spring - I will take my "revenge" there. So far I have signed up for the Machine Learning, Game Theory and Design and Analysis of Algorithms courses and I hope I will have enough time for that. :)</p><br />
<div class="separator" style="clear: both; text-align: center;"><a href="http://1.bp.blogspot.com/-g5duzA-oq0c/TvMMbvXLRRI/AAAAAAAADLs/LVDJCT0Pqpw/s1600/AI-Score.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://1.bp.blogspot.com/-g5duzA-oq0c/TvMMbvXLRRI/AAAAAAAADLs/LVDJCT0Pqpw/s320/AI-Score.png" width="247" /></a></div><br />Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com1tag:blogger.com,1999:blog-360329120074358364.post-82178839025750031982011-11-14T12:15:00.001+02:002011-11-14T12:44:25.701+02:00Java: Save InputStream Into FileImagine we need to save an InputStream into a file. That can happen when requesting some url, or when just copying a file from one place to another on the hard disk. Google returns a lot of answers for the query <i>java save inputstream to file</i>. I have checked the first results page, and almost everywhere one and the same solution is provided, which includes the following loop:<br />
<br />
<pre><code class="java">int read = 0;
byte[] bytes = new byte[1024];
while ((read = inputStream.read(bytes)) != -1) {
    out.write(bytes, 0, read);
}</code></pre><br />
Seriously, guys, don't you think there is something wrong here? Even in C++ people do not copy streams by operating on bytes anymore! There should be a much better way :) (I am not considering here any additional libraries that require extra jar files).<br />
<br />
<pre><code class="java">import sun.misc.IOUtils;
new FileOutputStream("tmp.txt").write(IOUtils.readFully(inputStream, -1, false));</code></pre>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com3tag:blogger.com,1999:blog-360329120074358364.post-40010912662886789342011-08-24T13:03:00.000+03:002013-04-03T11:28:31.315+03:00Top 10 Jenkins Must-Have Plugins<div dir="ltr" style="text-align: left;" trbidi="on">
We at ZeroTurnaround have been using Jenkins for a long time, and at last decided to put together a <a class="external" href="http://www.zeroturnaround.com/blog/top-10-jenkins-featuresplugins/">small review of the plugins and features</a> we use.</div>
Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-60471816799788467352011-06-10T22:02:00.002+03:002011-06-29T17:32:51.281+03:00Ruby Pack/Unpack<p>Having a party in ZeroTurnaround new office in Tartu. There is a mat on the floor near the entrance door that says:</p>
<pre><code>01010111011001010110110001100011011011110110110101100101</code></pre>
<p>Using ruby, we can quickly figure out, what that actually means:</p>
<pre><code class="ruby">["01010111011001010110110001100011011011110110110101100101"].pack('B*') #==> Welcome</code></pre>
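<p>Going the other direction, <code>unpack</code> with the same <code>'B*'</code> template recovers the bit string, so the two calls are inverses (a quick sketch):</p>
<pre><code class="ruby">bits = "01010111011001010110110001100011011011110110110101100101"
[bits].pack('B*')                    #==> "Welcome"
"Welcome".unpack('B*').first == bits #==> true</code></pre>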
<p><a class="external" href="http://www.codeweblog.com/ruby-string-pack-unpack-detailed-usage/">Ruby string pack unpack detailed usage.</a></p>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-35104142073626090892011-06-09T23:01:00.005+03:002011-06-13T17:36:44.019+03:00GeekOut: The First Java Conference In Estonia<p>Today I attended the first Java conference in Estonia: <a class="external" href="http://geekout.ee/">GeekOut</a>. It was also my first conference ever, as I have never participated in such an event before. Jumping ahead, I can say that the event was a success. It was well organized, informative, interesting, with great food and beer (although I do not drink alcohol :) ).</p>
<p>The day started with a small introduction from <a class="external" href="http://www.ekabanov.net/">Jevgeni Kabanov</a>, the founder and CTO of <a class="external" href="http://www.zeroturnaround.com/">ZeroTurnaround</a>, the organizer of GeekOut. After that the main part began.</p>
<p>The first talk, by <a class="external" href="http://martijnverburg.blogspot.com/">Martijn Verburg</a>, was about the new features coming in JDK 7. The <a class="external" href="http://openjdk.java.net/projects/coin/">Project Coin</a> features announced:
<ul><li><a class="external" href="http://mail.openjdk.java.net/pipermail/coin-dev/2009-February/000001.html">Strings in switch</a></li>
<li><a class="external" href="http://mail.openjdk.java.net/pipermail/coin-dev/2009-February/000011.html">Automatic Resource Management</a> (similar to the C# <code>using</code> construct)</li>
<li>Numeric literals with underscores (like in Ruby: <code>3_456_789</code>)</li>
<li><a class="external" href="http://mail.openjdk.java.net/pipermail/coin-dev/2009-February/000009.html">Improved Type Inference for Generic Instance Creation</a><br/>We can skip the generic type parameters on the right-hand side: <pre><code>Map&lt;String, List&lt;String&gt;&gt; anagrams = new HashMap&lt;&gt;();</code></pre></li>
<li>New file handling mechanism, which "is finally done right, I hope" (M.V.)</li>
<li>Non-blocking I/O for sockets and files; new Files and Paths classes that provide utility functions for working with file-like objects and are quicker than in JDK 6; multi-catch; and some others...</li>
</ul>
<p>The next speech was "Riding the data tsunami and coming out on top" by <a class="external" href="http://www.codespot.net/blog/">Alex Snaps</a>. He introduced <a class="external" href="http://www.terracotta.org/company/?src=/products">Terracotta</a> - an open source solution for application scalability, availability and performance.</p>
<p>After a coffee break <a class="external" href="http://www.jole.fi/">Joonas Lehtinen</a> told us about <a class="external" href="http://vaadin.com/home">Vaadin</a> - a web framework for Java that allows creating rich internet applications fast and without a line of JavaScript. An interesting fact is that Vaadin is actually "10 years old, but 21 months young" - the framework has existed for a long time, but only recently became popular. You can write everything in Java: no JavaScript debugging, no HTML, and if you are writing in some other JVM language, then you don't need Java either! Scala, Groovy, JRuby ... are all fine. There was also a nice coding session where Joonas wrote a web application in 15 minutes! I liked it, very impressive.</p>
<p><a class="external" href="http://john-davies.blogspot.com/">John Davies</a> talked about integration of applications. Banking realities: FpML, 100-100000 messages/sec, latency critical (10 ms), XML is evil - no time for &lt;&gt;, network cards process the messages, not the CPU - that is how critical time is, 30PB of cache?! We also learned why common message mapping is bad when you have a huge amount of data: it converts to a common message format and back again, when we could actually convert directly. What we actually need is to know how to convert each field of the message between a POJO and a particular format. Using XPath in objects... and so on.</p>
<p>After lunch NoSQL databases, particularly a graph database called <a class="external" href="http://neo4j.org/">Neo4j</a>, were covered by <a class="external" href="http://twitter.com/#!/peterneubauer">Peter Neubauer</a>. It was a kind of introductory presentation, showing the concepts of the database, with examples of storing and searching data and also the conditions in which graph databases can and should be used instead of SQL databases.</p>
<p>Jevgeni Kabanov showed problems that one may face when updating a web application. For instance, an out-of-memory error arises when the previous version of the application is not garbage collected: a leak, no matter how small, holds a reference to the classloader, which in turn holds a reference to the previous version of the application.</p>
<p>The last talk, presented once again by Martijn Verburg, was a relaxing one. Very funny, but at the same time serious. You have to figure out for yourself which qualities of a Diabolical Programmer are good, which are not, and which are good to some extent.</p>
<p>The closing of the GeekOut conference was in a cafe in the same building. Free pizza and some drinks - a nice ending to a good day. I will definitely attend GeekOut again next time!</p>
<p>The Slides:</p>
<ul>
<li><a class="external" href="http://www.slideshare.net/martijnverburg/back-to-thefuturewithjava-7geekout">Back To The Future With Java 7 ~ Martijn Verburg</a></li>
<li><a class="external" href="http://www.slideshare.net/jojule/vaadin-rich-web-apps-in-serverside-java-without-plugins-or-javascript">Vaadin, Rich Web Apps in Server-Side Java without Plug-ins or JavaScript ~ Joonas Lehtinen</a></li>
<li><a class="external" href="http://www.infoq.com/presentations/Large-Scale-Integration-in-Financial-Services">Architecting For Enterprise Scale ~ John T. Davies</a> (Filmed on QCon)</li>
<li><a class="external" href="http://www.slideshare.net/peterneubauer/geekout-tallinn-neo4j-for-the-rescue">Neo4j For the Rescue ~ Peter Neubauer</a></li>
</ul>
<p>PS. I will list all the slides of the presentations here as soon as they appear.</p>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-20039767144544405012011-05-11T11:16:00.000+03:002011-05-11T11:16:39.212+03:00Google Code Jam 2011 Qualification Round<p>This Saturday <a class="external" href="http://code.google.com/codejam/contest/dashboard?c=975485#">Google Code Jam 2011 Qualification Round</a> took place. I didn't have much time to spend on the problems, but still solved a couple and proceeded to Round 1.</p>
<p>The ones that I solved are <a class="external" href="http://code.google.com/codejam/contest/dashboard?c=975485#s=p1">Magicka</a> and <a class="external" href="http://code.google.com/codejam/contest/dashboard?c=975485#s=p2">Candy Splitting</a>.</p>
<h3>Magicka</h3>
<p>First I tried to solve it with some cunning string substitution involving regular expressions. But in the end the simple simulation did the trick. I actually was a little bit disappointed after the official solutions were announced, because I thought that there should be some trick in here.</p>
<pre><code class="ruby">#!/usr/bin/ruby
lines = ARGF.readlines
t = lines.first.to_i
(1..t).each do |test_id|
  arr = lines[test_id].split " "
  c = arr[0].to_i
  combines = arr[1..c].inject({}){|memo, e| memo[e[0..1]]=e[2]; memo[e[1]+e[0]]=e[2]; memo }
  d = arr[c+1].to_i
  opposed = arr[c+2..c+1+d]
  n = arr[c+2+d].to_i
  elems = arr[c+3+d] #.split("").collect {|i| i.to_sym }
  result = []
  elems.split("").each do |elem|
    result << elem
    if result.length > 1
      last2 = result[-2..-1].join
      if combines.key?(last2)
        result.pop(2)
        result = result << combines[last2]
      end
      opposed.each do |o|
        if result.include?(o[0]) && result.include?(o[1])
          result = []
          break
        end
      end
    end
  end
  puts "Case ##{test_id}: [#{result.join(", ")}]"
end</code></pre>
<h3>Candy Splitting</h3>
<p>This one was a little bit more tricky, and the author's solution is much simpler and clearer than mine. I made it using old school brute force, just like <a class="external" href="http://code.google.com/codejam/contest/dashboard?c=975485#s=p3&a=2">Goro</a> (strength is my strength :) )</p>
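<p>For reference, the official insight can be sketched in a few lines (the helper name is mine, not contest code): Patrick adds without carrying, which is XOR, so the two piles look equal to him exactly when the XOR of all candy values is zero; in that case any non-empty split satisfies him, and giving him only the smallest candy maximizes Sean's real sum.</p>
<pre><code class="ruby"># Sketch of the official idea; candy_split is an illustrative name.
def candy_split( candies )
  # Patrick's carry-less addition is XOR: equal piles need total XOR == 0.
  return "NO" unless candies.inject(:^) == 0
  # Give Patrick only the smallest candy; Sean keeps the rest.
  candies.inject(:+) - candies.min
end</code></pre>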
<pre><code class="ruby">#!/usr/bin/ruby
def s( candies, pile1, pile2 )
  #p "c:#{candies}, 1:#{pile1}, 2:#{pile2}"
  if candies.empty?
    if !pile1.empty? && !pile2.empty? && pile1.inject(0){|memo, i| memo ^ i} == pile2.inject(0){|memo, i| memo ^ i}
      return pile1
    else
      return false
    end
  end
  x = candies.shift
  p1 = pile1.clone
  # clone candies too, so the second branch still sees the remaining candies
  return s( candies.clone, p1 << x, pile2 ) || s( candies, pile1, pile2 << x )
end
lines = ARGF.readlines
t = lines.first.to_i
(1..t).each do |test_id|
  n_candies = lines[test_id*2-1]
  candies = lines[test_id*2].split(" ").collect{|i| i.to_i }
  r = s( candies.sort.reverse, [], [] )
  r = !r ? "NO" : r.inject(:+)
  puts "Case ##{test_id}: #{r}"
end</code></pre>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-44827991059751466222011-04-21T17:03:00.000+03:002011-04-21T17:03:39.229+03:00Mercurial In Ubuntu<p>Since I moved to <a href="http://www.zeroturnaround.com/" class="external">ZeroTurnaround</a> I have had to learn a lot of new and interesting things. Most of them are too simple to write about, but as time passed I finally thought of some things I could share with you.</p>
<p>We use <a class="external" href="http://mercurial.selenic.com/">Mercurial</a> as SCM. So here is a small tip for ubuntu users how to configure hg.</p>
<p>Mercurial is available to install through repository</p>
<pre><code class="bash">$ sudo aptitude install mercurial</code></pre>
<p>So now as it is installed, we can configure it. Mercurial reads configuration data from several files, if they exist. To list them use:</p>
<pre><code class="bash">$ hg help config</code></pre>
<p>Now we will edit the <code>$HOME/.hgrc</code> file to add some user information and authentication and to enable plugins. A <a class="external" href="http://www.selenic.com/mercurial/hgrc.5.html">configuration file consists of sections</a>, led by a <code>[section]</code> header and followed by <code>name = value</code> entries (sometimes called <i>configuration keys</i>).</p>
<pre><code class="ini">[ui]
username = Firstname Lastname &lt;firstname.lastname@example.net&gt;</code></pre>
<p>Here we introduced ourselves. The section is <code><a class="external" href="http://www.selenic.com/mercurial/hgrc.5.html#ui">[ui]</a></code> and ui.username is typically a person's name and email address. If this field is not declared in hgrc, you will have to enter it manually every time you commit.</p>
<pre><code class="ini">[auth]
foo.prefix=*
foo.username=firstname.lastname@example.net
foo.password=SecretPassword123</code></pre>
<p>The <code><a class="external" href="http://www.selenic.com/mercurial/hgrc.5.html#auth">[auth]</a></code> section allows you to store credentials for HTTP authentication - I hate entering a username and password every time I pull or push. You can store several different credentials for different servers here. They are grouped by the part before the full stop (<i>foo</i> in my example), and auth.prefix determines which credentials should be used for which HTTP address. It should be either * or a URI with or without the scheme part. In my example one and the same credentials are used for all addresses.</p>
<pre><code class="ini">[extensions]
fetch =
color =
#this extension will get loaded from the file specified
myfeature = ~/.hgext/myfeature.py</code></pre>
<p>Mercurial has an extension mechanism for adding new features. To enable an extension, create an entry for it in the <code><a class="external" href="http://www.selenic.com/mercurial/hgrc.5.html#extensions">[extensions]</a></code> section. There is a <a class="external" href="http://mercurial.selenic.com/wiki/UsingExtensions">list of extensions</a> for Mercurial and you can <a class="external" href="http://mercurial.selenic.com/wiki/WritingExtensions">write one yourself</a>. Extensions bundled with Mercurial do not need anything after the <i>equals</i> sign, but you need to provide the full path for others.</p>
<p>Now that we have configured it a little bit, we can start using it. The <a class="external" href="http://hginit.com/">Mercurial tutorial</a> by <a class="external" href="http://www.joelonsoftware.com/">Joel Spolsky</a> will teach you how.</p>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-69717711432799555932011-01-14T16:38:00.001+02:002011-01-15T17:29:19.953+02:00Hello, ZeroTurnaround!<p>A week ago I left Axinom. Now I work at <a class="external" href="http://www.zeroturnaround.com/">ZeroTurnaround</a> - the famous home of <a class="external" href="http://www.zeroturnaround.com/jrebel/">JRebel</a>. I don't want to write about the reasons that made me do it. (Those who are interested may ask in private.) The only thing I can say is that I am definitely glad :) I did it.</p>
<p>This job switch also means a change of the technologies I will be working with. During this week I recalled a little bit of Java and got acquainted with Maven, Ant, Hudson and Selenium (not Java specific, but still something new). So look for Java related posts in the near future.</p>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-9719239576136961842010-12-31T10:44:00.000+02:002010-12-31T10:44:00.940+02:00Happy New Year!<div class="separator" style="clear: both; text-align: center;">
<a href="http://1.bp.blogspot.com/_13GxZrI8up4/TR2XXkrPWqI/AAAAAAAAC_g/c5MnrwSyn6Y/s1600/tree.png" imageanchor="1" style="margin-left:1em; margin-right:1em"><img border="0" height="290" width="400" src="http://1.bp.blogspot.com/_13GxZrI8up4/TR2XXkrPWqI/AAAAAAAAC_g/c5MnrwSyn6Y/s400/tree.png" /></a></div>
<p>The <a href="http://xkcd.com/835/">image is not mine</a>, but I really liked it. Happy New Year to all of you!</p>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-89487875553333330202010-12-20T15:50:00.000+02:002010-12-20T15:50:16.006+02:00Automatic Update Of Moles References To Other Solution Projects<p>The situation: I have a solution with several projects in it. One of them is a UnitTest project that uses Moles for mocking, and it has references to the other projects in the solution as well as to their moled dlls. Every time I change code in my projects and rebuild them, the UnitTest project loses the references to the moled dlls, as they already have other versions. I see that Moles builds new versions of the projects' dlls, but it does not update the references. Why?</p>
<p>RTFM, Juri.</p>
<p>From the <a class="external" href="http://research.microsoft.com/en-us/projects/pex/molesmanual.pdf">Microsoft Moles Reference Manual (page 12)</a></p>
<blockquote><i>The Moles framework for Visual Studio monitors build events and automatically forces the regeneration of stub types for projects that have been updated. For efficiency
reasons, this automatic update only works for <code>.moles</code> files that are in the <b>top level
folder of the project</b>, to avoid walking large nested project structures. This automatic
update creates a smooth experience when writing code in a test-driven development
style.</i></blockquote>
<p>So the automatic update of references works only for <code>.moles</code> files that are in the same folder as the project file. But I had moved all the <code>.moles</code> files into a separate folder.</p>
<p>Now that I have moved all the <code>.moles</code> files back, it is working.</p>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-20790755768529945142010-12-17T23:10:00.001+02:002010-12-17T23:13:54.732+02:00Solution: Eee Pc Ubuntu 10.10 Wireless Connection Fail After Resume From Hibernate<p>After upgrading your Ubuntu to Maverick, you may have experienced some problems using wireless networks: when your computer resumes from hibernate, it is no longer able to connect to any wireless network, and only a restart will make it work again. Some random disconnects from wireless also occur. This concerns not only the Eee PC (I have a 1000h), but, I believe, every computer that uses a RaLink network controller. First, check if this applies to you :)</p>
<pre><code class="bash">$ lspci -k|grep -i network --after-context 3
03:00.0 Network controller: RaLink RT2860
Subsystem: Foxconn International, Inc. Device e002
Kernel driver in use: rt2800pci
Kernel modules: rt2800pci, rt2860sta</code></pre>
<p>With Ubuntu 10.10, some hardware that was previously driven by the rt2860sta driver is now driven by default by the rt2800pci driver. Sometimes the new rt2800pci does not work as well as the rt2860sta; in that case it is often possible to switch back by blacklisting. As we already saw, we have both drivers installed and the PCI one in use. Now we will create a text file that will allow us to easily switch between the drivers.</p>
<pre><code class="bash">sudo gedit /etc/modprobe.d/blacklist-wlan.conf</code></pre>
<p>Copy these 2 lines into the newly created file.</p>
<pre><code>blacklist rt2800pci
#install rt2860sta /bin/false</code></pre>
<p>And save. After the reboot your computer will use the rt2860sta driver. If you want to switch back to the rt2800pci driver, just comment the first line, uncomment the second and reboot.</p>
<p>PS. Solution was found <a href="https://answers.launchpad.net/ubuntu/+question/132350" class="external">here</a></p>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com2Tallinn, Estonia59.4388619 24.754471559.2642994 24.2875525 59.6134244 25.221390500000002tag:blogger.com,1999:blog-360329120074358364.post-38053408552885495112010-12-14T22:59:00.000+02:002010-12-14T22:59:03.579+02:00SICP: The 8 queens puzzle<p>The Google AI Contest is over and now I continue to read the SICP book. Now we are solving the famous <a href="http://en.wikipedia.org/wiki/Eight_queens_puzzle" class="external">8 queens puzzle</a>. One way to solve the puzzle is to work across the board, placing a queen in each column. Once we have placed k - 1 queens, we must place the k<sup>th</sup> queen in a position where it does not check any of the queens already on the board. We can formulate this approach recursively: Assume that we have already generated the sequence of all possible ways to place k - 1 queens in the first k - 1 columns of the board. For each of these ways, generate an extended set of positions by placing a queen in each row of the kth column. Now filter these, keeping only the positions for which the queen in the kth column is safe with respect to the other queens. This produces the sequence of all ways to place k queens in the first k columns. By continuing this process, we will produce not only one solution, but all solutions to the puzzle. We implement this solution as a procedure queens, which returns a sequence of all solutions to the problem of placing n queens on an n×n chessboard.</p>
<p>In the beginning we are given this procedure.</p>
<pre><code class="lisp">(define (queens board-size)
(define (queen-cols k)
(if (= k 0)
(list empty-board)
(filter
(lambda (positions) (safe? k positions))
(flatmap
(lambda (rest-of-queens)
(map (lambda (new-row)
(adjoin-position new-row k rest-of-queens))
(enumerate-interval 1 board-size)))
(queen-cols (- k 1))))))
(queen-cols board-size))
</code></pre>
<p>and we need to write all the sub-procedures that are used by the main one. Let's start from the simplest one.</p>
<pre><code class="lisp">(define empty-board '())</code></pre>
<p>The next task is to write the <code>safe?</code> function. This should determine, for a set of positions, whether the queen in the k<sup>th</sup> column is safe with respect to the others. I decided to split it into three procedures: one checks the horizontal line, one the diagonal going up, and the other the diagonal going down. We don't actually need the k argument, as we should always check the last queen only. I use reversed-positions as we are starting from the last queen and going to the first one.</p>
<pre><code class="lisp">(define (safe? k positions)
(let ((reversed-positions (reverse positions))
(last (car (reverse positions))))
(and (horizontal-safe? last (cdr reversed-positions))
(diagonal-up-safe? last (cdr reversed-positions))
(diagonal-down-safe? last (cdr reversed-positions)))))
(define (horizontal-safe? q positions)
(if (null? positions)
true
(and (not (= (car positions) q))
(horizontal-safe? q (cdr positions)))))
(define (diagonal-up-safe? q reversed-positions)
(if (null? reversed-positions)
true
(and (not(= (car reversed-positions) (+ q 1)))
(diagonal-up-safe? (+ q 1) (cdr reversed-positions)))))
(define (diagonal-down-safe? q reversed-positions)
(if (null? reversed-positions)
true
(and (not(= (car reversed-positions) (- q 1)))
(diagonal-down-safe? (- q 1) (cdr reversed-positions)))))</code></pre>
<p>The remaining procedures are much easier to write. <code>flatmap</code> should join all the lists that represent queens on the board into one big list. <code>enumerate-interval</code> just returns a list with values between <code>low</code> and <code>high</code>. <code>adjoin-position</code> just adds a new queen to the existing ones.</p>
<pre><code class="lisp">(define (flatmap proc lst)
(foldr append '() (map proc lst)))
(define (enumerate-interval low high)
(if (> low high)
'()
(append (list low) (enumerate-interval (+ low 1) high))))
(define (adjoin-position new-row k rest-of-queens)
(append rest-of-queens (list new-row)))</code></pre>
<p>As you can see, there are some parameters in procedures that are not used. They can be safely removed.</p>
<pre><code>(queens 8)</code></pre>
<p>Returns 92 solutions - which is right.</p>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com2tag:blogger.com,1999:blog-360329120074358364.post-19078877324692418232010-12-05T11:55:00.000+02:002010-12-05T11:55:48.601+02:00Understanding Pac-Man Ghost Behavior<p>A very <a href="http://gameinternals.com/post/2072558330/understanding-pac-man-ghost-behavior" class="external">interesting article</a> about the ghost AI in the world famous game.</p>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-41931888410049732932010-12-04T12:08:00.002+02:002011-11-22T15:51:42.840+02:00Google AI Challenge. Final.<p>So the <a href="http://planetwars.aichallenge.org" class="external">Google AI Contest</a> is over. I have finished in <a href="http://planetwars.aichallenge.org/profile.php?user_id=10047" class="external">150th place</a>, which is good, but could be better :).</p><p>The winner - <a href="http://planetwars.aichallenge.org/profile.php?user_id=8565" class="external">bocsimacko</a> - has <a href="http://quotenil.com/Planet-Wars-Post-Mortem.html" class="external">shared his algorithm and source code in his blog.</a> The same was done by the runner-up - <a href="http://planetwars.aichallenge.org/profile.php?user_id=7026" class="external">_iouri_</a> - <a href="http://iouri-khramtsov.blogspot.com/2010/11/google-ai-challenge-planet-wars-entry.html" class="external">here</a>.
As the two best algorithms are revealed, I will not bother with revealing mine :) and will start with homework - now the task is to go through at least one of them and understand all the ideas discovered and implemented.</p><p>I am looking forward to the next Google AI Challenge, and this time I will prepare thoroughly to get a high place in it.</p>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0tag:blogger.com,1999:blog-360329120074358364.post-11552871336914822762010-09-27T15:10:00.001+03:002010-09-27T16:01:16.234+03:00Google AI Challenge. Continued.<p>Yesterday's 6 hours of coding and implementing the new version of my bot, which beat the previous one on every one of 100 test maps, were not wasted - I am in 14<sup>th</sup> place with an ELO of 3335.</p>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com2tag:blogger.com,1999:blog-360329120074358364.post-64918567529228080822010-09-24T13:37:00.000+03:002010-09-24T13:37:47.614+03:00Google AI Challenge<p>The University of Waterloo Computer Science Club organized an <a href="http://ai-contest.com/">AI Contest</a>, sponsored by Google. Contestants are asked to create a bot that plays the PlanetWars game, which is based on <a href="http://www.galcon.com/">GalCon</a>. The game field consists of several planets that are occupied by one of the players or are neutral. Planets produce ships (bigger planets do it quicker), which players use to conquer new planets. The goal is to beat the other player.</p>
<center><a href="http://1.bp.blogspot.com/_13GxZrI8up4/TJx9pOjzPuI/AAAAAAAAC_A/LtR9Xi00h3E/s1600/PlanetWars.png" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" height="640" src="http://1.bp.blogspot.com/_13GxZrI8up4/TJx9pOjzPuI/AAAAAAAAC_A/LtR9Xi00h3E/s640/PlanetWars.png" width="616" /></a></center>
<p>I am also taking part in the tournament and, frankly speaking, doing well. Currently I am around 200th place (username: <a href="http://ai-contest.com/profile.php?user_id=8308">2stupidogs</a>) in the total ranking and one of the best in Estonia. Here are some strategy thoughts I can share with you as a starting point. These were used by my first two bots. (Now I have implemented a deeper algorithm.) The organisers provide everyone with a default strategy bot that can be further improved, and by making the following small improvements you can make the top 500 easily.</p>
<p>The default bot finds its strongest planet and sends half of the fleet from it to a nearby planet that it considers weak.</p>
<ol>
<li>First, send as many ships as needed to conquer the planet, not just half.</li>
<li>Then change the planet score formula (it determines how weak an enemy planet is). It should depend on distance, growth rate and the current fleet on the planet. For example, count in how many days the new planet will regenerate the fleet you had to use to conquer it. The less it takes, the better the planet is for you.</li>
<li>Don't send a fleet to a planet where one was sent already. (Well, this is debatable, but you can try.)</li>
<li>Make a list of weakest planets for your strongest planet (not just the one weakest planet), and send a fleet to all of them as soon as you have enough ships.</li>
<li>Make a list of your strongest planets and look for the weakest planets for each of them.</li>
</ol>
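<p>Point 2 can be sketched as a tiny scoring helper (the name and the exact formula are my illustration, not the contest API): estimate the ships needed to take a planet, then score it by how many turns of growth it takes to pay that cost back - the lower, the better.</p>
<pre><code class="ruby"># Illustrative sketch only: a lower score means a more attractive target.
def payback_score( distance, growth_rate, garrison, neutral )
  # An enemy planet keeps producing ships while our fleet travels;
  # a neutral one does not.
  ships_needed = garrison + ( neutral ? 0 : distance * growth_rate ) + 1
  ships_needed.to_f / growth_rate  # turns to regenerate what we spent
end</code></pre>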
<p>Just by implementing this strategy I managed to make the top 350 list, but there were fewer contestants then.</p>Jurihttp://www.blogger.com/profile/01376742827133744974noreply@blogger.com0