Check out Dinevore if you’re a foodie who is a techie. Their API is now live!
Contact them via Twitter or email their team.
What are the limits of expressing thoughts on Twitter?
Here’s a powerful (though inefficient when run) thought that can be expressed on Twitter: a quicksort in Erlang in 126 characters.
A lot of Perl one-liners can fit into a tweet – powerful and useful ones.
Haikus can be expressed in a tweet.
The question, “What form of body language do most FBI interrogators consider to be the most telling?” can be answered in a tweet.
A marriage proposal can be answered in a tweet.
You can propose the concept of a hash tag in a tweet:
However, there are many thoughts that seem to be difficult to fit into a tweet:
Twitter encourages the laconic expression of thought, which means plenty of affirmations, aphorisms, insults, congratulations, and reminders that can display any combination of sharp wit, pointed humor, and succinctness of expression. The mot juste becomes very important under the constraint of 140 characters.
There’s a pretty useful spreadsheet comparing different URL shorteners here:
http://spreadsheets.google.com/pub?key=pApF4slh39ZkqUOoZQSo8bg
TinyURL just shortens your URL and doesn’t provide any other data. A great feature, though, is the preview option, which lets you see where a link points before you click it so you don’t get Rick-rolled.
Bit.ly is my favorite service. The features I like are:
I wanted to figure out just how much effort it takes to code a URL shortener, so I coded up my own MVC based on Rasmus’ article on making one in PHP, and added URL-shortening code to it. You can get the URL shortener I wrote on Github.
Here are a few things that I noticed once I put this code on Seductive.me:
Just notes for myself on adding more MySQL slave databases without shutting down the master database.
on existing slave:
copy the data dir from /var/lib/mysql and the data from /var/run/mysqld to the new slave database:
copy /etc/my.cnf from old slave to new slave
add entry for new server-id
start existing slave:
start new slave:
on masterdb:
e.g.:
test on master:
create database repl;
check on slave:
show databases; /* should show new database */
test on master:
drop database repl;
check on slave:
show databases; /* new database should be dropped */
Now it’s time to turn this into an automated shell script with Expect in there.
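Here’s a minimal sketch of what that script might look like, run from a third box. The hostnames, paths, server-id, and the init.d-style service commands are my assumptions, and the Expect part for handling password prompts is left out:

#!/bin/bash
# Sketch: clone an existing slave to bring up a new slave without touching the master.
# OLD_SLAVE and NEW_SLAVE are placeholder hostnames; SSH key access is assumed.
OLD_SLAVE=oldslave.example.com
NEW_SLAVE=newslave.example.com

# on existing slave: stop replication and MySQL so the data files are consistent
ssh $OLD_SLAVE 'mysql -e "STOP SLAVE;" && /etc/init.d/mysql stop'

# copy the data dir from /var/lib/mysql and the data from /var/run/mysqld to the new slave
ssh $OLD_SLAVE "rsync -az /var/lib/mysql/ $NEW_SLAVE:/var/lib/mysql/"
ssh $OLD_SLAVE "rsync -az /var/run/mysqld/ $NEW_SLAVE:/var/run/mysqld/"

# copy /etc/my.cnf from the old slave to the new slave and give the new slave its own server-id
ssh $OLD_SLAVE "scp /etc/my.cnf $NEW_SLAVE:/etc/my.cnf"
ssh $NEW_SLAVE "sed -i 's/^server-id.*/server-id = 3/' /etc/my.cnf"

# start the existing slave, then start the new slave
ssh $OLD_SLAVE '/etc/init.d/mysql start && mysql -e "START SLAVE;"'
ssh $NEW_SLAVE '/etc/init.d/mysql start && mysql -e "START SLAVE;"'

# on masterdb: let the new slave connect, e.g.
# GRANT REPLICATION SLAVE ON *.* TO 'repl'@'newslave.example.com' IDENTIFIED BY 'secret';

After that, the create database repl / drop database repl test from the notes above confirms replication is flowing to the new slave.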
Bad things happen:
My Twitter page @mostlylisa has been hacked and deleted. It’s GONE!!! I am currently catatonic. Please help me restore my account, it’s like, my meaning in life.
Much love to whom ever helps me!
PS. If you miss me like I miss you, you can always be my Friend OR Fan on Facebook. I know it’s not the same, but it’s all I have now. *hold me*
It only took Twitter about 3 days to recover from this.
Is there a faster way?
First let’s look at the current options:
BackupMyTweets required too much info to get it working. No, you cannot have my Gmail password.
I’ve tried Tweetbackup and they get kudos for using OAuth to make it easy to back your tweets up.
The third option, begging Twitter, simply can’t scale and will only work for the few elites who are close to Twitter or popular enough to get noticed. There isn’t a consumer solution.
How do we solve the problem of social media backup?
The great thing is that the problem is:
Once again, if you haven’t already, use BackUpMyTweets.
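If you don’t want to hand your credentials to anyone, a rough do-it-yourself option is to pull your own timeline from Twitter’s REST API on a cron job. This is just a sketch; the screen name, page range, and backup path are placeholders:

#!/bin/bash
# Grab up to 1000 of your most recent public tweets as JSON (200 per page, 5 pages).
SCREEN_NAME=yourname
BACKUP_DIR=/var/backups/tweets
mkdir -p $BACKUP_DIR
for page in 1 2 3 4 5; do
  curl -s "http://twitter.com/statuses/user_timeline.json?screen_name=$SCREEN_NAME&count=200&page=$page" \
    > $BACKUP_DIR/$SCREEN_NAME-page$page-$(date +%Y%m%d).json
done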
I’m not comparing apples to apples yet… but out of the box, drizzle does inserts faster than MySQL using the same table type, InnoDB.
Here’s what I’m comparing:
drizzle r1126 configured with defaults, and
MySQL 5.1.38 configured with
./configure --prefix=/usr/local/mysql --with-extra-charsets=complex \
  --enable-thread-safe-client --enable-local-infile --enable-shared \
  --with-plugins=partition,innobase
which is really nothing complicated.
SQL query caching is turned off on both database servers. Both are using the InnoDB engine plug-in.
I’m running these benchmarks on a MacBook Pro 2.4 GHz Intel Core 2 Duo with 2GB 1067 MHz DDR3 RAM.
I wrote benchmarking software about 2 years ago to test partitions but I’ve since abstracted the code to be database agnostic.
You can get the benchmarking code at Github.
At the command-line, you type:
php build_tables.php 10000 4 drizzle
where 10000 is the number of rows allocated in total, 4 is the number of partitions for those rows, and drizzle is the database server to benchmark.
You can type the same thing for mysql:
php build_tables.php 10000 4 mysql
and get interesting results.
Here’s what I got:
bash-3.2$ php build_tables.php 10000 4 mysql
Elapsed time between Start and Test_Code_Partition: 13.856538
last table for php partition: users_03
Elapsed time between No_Partition and Code_Partition: 14.740206
-------------------------------------------------------------
marker            time index            ex time      perct
-------------------------------------------------------------
Start             1252376759.26094100   -            0.00%
-------------------------------------------------------------
No_Partition      1252376773.11747900   13.856538    48.45%
-------------------------------------------------------------
Code_Partition    1252376787.85768500   14.740206    51.54%
-------------------------------------------------------------
Stop              1252376787.85815000   0.000465     0.00%
-------------------------------------------------------------
total             -                     28.597209    100.00%
-------------------------------------------------------------
20000 rows inserted...
bash-3.2$ php build_tables.php 10000 4 drizzle
Elapsed time between Start and Test_Code_Partition: 7.502141
last table for php partition: users_03
Elapsed time between No_Partition and Code_Partition: 7.072367
-------------------------------------------------------------
marker            time index            ex time      perct
-------------------------------------------------------------
Start             1252376733.68141500   -            0.00%
-------------------------------------------------------------
No_Partition      1252376741.18355600   7.502141     51.47%
-------------------------------------------------------------
Code_Partition    1252376748.25592300   7.072367     48.52%
-------------------------------------------------------------
Stop              1252376748.25627400   0.000351     0.00%
-------------------------------------------------------------
total             -                     14.574859    100.00%
-------------------------------------------------------------
20000 rows inserted...
MySQL: 699 inserts per second
drizzle: 1372 inserts per second
As far as inserts go, drizzle is about 2 times faster out of the box than MySQL.
I’ve been using j.mp for two weeks now and it’s filled the void that tr.im left after going out of business.
Long Live j.mp.
This blog post is a quick introduction to load balancing and auto scaling with Amazon’s EC2.
I was kinda amazed at how easy it was.
Prelims: Download the load balancer API software, auto scaling software, and cloud watch software. You can get all three at a download page on Amazon.
Let’s load balance two servers.
elb-create-lb lb-example --headers \
  --listener "lb-port=80,instance-port=80,protocol=http" \
  --availability-zones us-east-1a
The above creates a load balancer called “lb-example,” and will load balance traffic on port 80, i.e. the web pages that you serve.
To attach specific servers to the load balancer you just type:
elb-register-instances-with-lb lb-example --headers \
  --instances i-example,i-example2
where i-example and i-example2 are the instance IDs of the servers you want added to the load balancer.
You’ll also want to monitor the health of the load balanced servers, so please add a health check:
elb-configure-healthcheck lb-example --headers \
  --target "HTTP:80/index.html" --interval 30 --timeout 3 \
  --unhealthy-threshold 2 --healthy-threshold 2
Now let’s set up autoscaling:
as-create-launch-config example3autoscale --image-id ami-mydefaultami \
  --instance-type m1.small
as-create-auto-scaling-group example3autoscalegroup \
  --launch-configuration example3autoscale \
  --availability-zones us-east-1a \
  --min-size 2 --max-size 20 \
  --load-balancers lb-example
as-create-or-update-trigger example3trigger \
  --auto-scaling-group example3autoscalegroup --namespace "AWS/EC2" \
  --measure CPUUtilization --statistic Average \
  --dimensions "AutoScalingGroupName=example3autoscalegroup" \
  --period 60 --lower-threshold 20 --upper-threshold 40 \
  --lower-breach-increment=-1 --upper-breach-increment 1 \
  --breach-duration 120
With the 3 commands above I’ve created an auto-scaling scenario where a new server is spawned and added to the load balancer whenever average CPU utilization stays above 40% for two minutes, and a server is removed when it drops below 20% for two minutes.
Ideally you want to set --lower-threshold to something high like 70 and --upper-threshold to 90, but I set them to 20 and 40 respectively just to be able to test.
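For a real deployment that trigger might look more like the following; it’s the same command as above with only the thresholds changed:

as-create-or-update-trigger example3trigger \
  --auto-scaling-group example3autoscalegroup --namespace "AWS/EC2" \
  --measure CPUUtilization --statistic Average \
  --dimensions "AutoScalingGroupName=example3autoscalegroup" \
  --period 60 --lower-threshold 70 --upper-threshold 90 \
  --lower-breach-increment=-1 --upper-breach-increment 1 \
  --breach-duration 120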
I tested using siege.
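Something along these lines will drive enough traffic to trip the trigger; the hostname is just a placeholder for whatever DNS name your load balancer gets, and the concurrency and duration are arbitrary:

siege -c 50 -t 10M http://lb-example-1234567890.us-east-1.elb.amazonaws.com/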
Caveats: the auto-termination part is buggy, or simply didn’t work. As the load went down, the number of servers online remained the same. Anybody have thoughts on this?
What does auto-scaling and load balancing in the cloud mean? Well, the total cost of ownership for scalable, enterprise infrastructure just went down by a lot. It also means that IT departments can just hire a cloud expert and deploy solutions from a single laptop instead of having to figure out the cost of hardware load balancers and physical servers.
The age of Just-In-Time IT just got ushered in with auto-scaling and load balancing in the cloud.
If you don’t fail fast enough, you’re on the slow road to success.
One idea I recently failed at was using screen and sitebeagle to monitor sites.
It’s not a complete failure… it works okay.
Due to budget constraints, I put my screen and sitebeagle setup on a production server.
For some reason that production server ran out of space and became unresponsive. Screen no doubt caused this. I was alerted to the issue and did a reboot.
After the reboot, although Amazon’s monitoring tools told me the server was okay, the server was not. The MySQL database was in an EBS volume and needed to be re-mounted.
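The fix is only a couple of commands once you can get back on the box, but nothing runs them for you. The volume ID, device name, and mount point below are assumptions, since they depend on how the volume was attached:

# re-attach the EBS volume if the reboot detached it, then re-mount and restart MySQL
ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdf
mount /dev/sdf /vol
/etc/init.d/mysql start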
The solution I now have in place is still screen and sitebeagle. But I use another server with screen and sitebeagle on it to monitor the production server that gave me the issue in the first place.
It’s a question of who will monitor the monitors… for web sites with few users, the answer’s pretty bleak. In the world of super popular commercial sites, the answer’s clear: the wisdom of crowds will monitor the web sites.
I recently created a Windows 2003 EC2 AMI for cross-platform browser testing: ami-69739500
It has the following pre-installed:
With that list you’re pretty much all set to troubleshoot cross-platform browser issues.
There’s IIS 6.0 and SQL Server, too.
I’ve linked the password for this AMI at http://www.codebelay.com/ami-69739500.txt. It’s a shortcoming of Windows AMIs on EC2 that I have to link the password, so please change it once you get into the instance.
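If you want to try the AMI, launching it is one command with the EC2 API tools; the keypair, instance type, and availability zone here are whatever suits you:

ec2-run-instances ami-69739500 -k my-keypair -t m1.small -z us-east-1a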