Unishire The Weave

In this post I am going to tell you about Unishire The Weave, an upcoming apartment project from Unishire Realtors Pvt. Ltd. situated off Thanisandra Road in Bangalore, and my experience of booking a flat there.

I recently booked a 2BHK apartment in this project. The deciding factor for me was the payment scheme.

There was a 10% down payment scheme, with a further 10% payable in two installments of 5% each, due in October 2014 and July 2015. Since I booked in October, I ended up paying 15% as the down payment, but the ability to pay the remaining 5% in July 2015 was still a big plus. The downside is that possession of this project is scheduled for late 2017, and I don't expect it to be ready before 2018, and that is being optimistic.

As of today, they have approvals only up to the fourth floor. The remaining approvals should be available by the end of January 2015 (the earlier quoted date was Nov-Dec 2014).

The Unishire group is presently focused on its other two projects in Thanisandra, so I guess they will dedicate themselves fully to this project only once the other two are done.

I booked at the rate of Rs. 3750 per sq. ft. and have applied for a home loan from HDFC.

Overall, my experience dealing with the sales people has been good, and the HDFC sales person has also been a pleasure to deal with. All paperwork was done at my home/office.

This is the plot of the project as of Oct 19, 2014:

 

And here is the Google Maps location: https://goo.gl/maps/3dUTI

Update, Oct 25, 2014: Got a call from the sales guy saying management is recommending ICICI, since HDFC has apparently not yet released money for the project. They will confirm on Monday whether I should switch to ICICI. I will switch only if ICICI waives the processing fee, since I have already paid the processing fee to HDFC (and the loan is in the final stages of approval).

Yahoo! India October 2014 layoff

So by now you must have heard about the Yahoo! India layoff of October 2014, where they are “laying” off everyone but operations. Well, here is what is happening:

Most of the development teams have already been shifted to the US. Most of the remaining development teams in India have been given an offer to move to the US, and most of them have accepted. In all, about 600 developers have been asked to leave, while about 300 of the remaining folks have been offered US relocation.

The folks who have been given the US offer are mostly being offered a four-month severance package if they choose not to accept it. Those who have not been given an offer to move have been given a similar severance package. The severance package differs across teams and ranks.

The operations team (what Yahoo! used to call the Service Engineering Organisation), which falls in the realm of the now famous DevOps, has mostly neither been given an offer to move nor been asked to leave. However, they have not been offered any retention package either, which suggests the company is not too keen on keeping them around and would rather be glad if they left on their own (and most of them are already planning to).

The day started with a mail calling everyone to a meeting in the morning. All of the managers had already been briefed about the move earlier (the previous evening?). Individuals were informed separately about where they stood (whether they were being offered a move or being let go).

As per an official email from Yahoo! to the media:

“As we ensure that Yahoo is on a path of sustainable growth, we’re looking at ways to achieve greater efficiency, collaboration and innovation across our business,” Prachi Singh, Manager and Lead, Corporate Communications, Yahoo (India) said in an email statement. “To this effect, we’re making some changes to the way we operate in Bangalore leading to consolidation of certain teams into fewer offices. Yahoo will continue to have a presence in India and Bangalore remains an important office.”

The most remarkable thing about this change has been the absence of leaks. It's been a while since news like this leaked out of Yahoo!. There was once a time when Yahoos would read such news outside first, before hearing about it internally. That has definitely changed.

At least something is going well for Yahoos.

 

LinkedIn down?

Looks like as of Sun Sep 28 21:27:08 UTC 2014, LinkedIn is down on the web with an infinite redirect loop. The problem exists both when logged in and when logged out.

Going by the looks of it, a new homepage deployment might have caused a faulty redirect. Surprisingly, I don't see any other reports on Twitter of a LinkedIn failure at this time. Is it just me?

 

Screen Shot 2014-09-29 at 2.59.49 AM

Anshu-MacBook-Pro:~ anshup$ curl -I -L -s https://www.linkedin.com/
HTTP/1.1 301 Moved Permanently
Server: Apache-Coyote/1.1
P3P: CP="CAO CUR ADM DEV PSA PSD OUR"
Location: https://www.linkedin.com
Content-Language: en-US
Content-Length: 0
Vary: Accept-Encoding
Date: Sun, 28 Sep 2014 21:29:06 GMT
X-FS-UUID: 18019603b8379813f0ea79a8782b0000
X-LI-UUID: GAGWA7g3mBPw6nmoeCsAAA==
X-Li-Fabric: prod-lva1
Set-Cookie: _lipt=deleteMe; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/
Set-Cookie: leo_auth_token="GST:UvElobHw83UX24kTRj2lklHVoQQXXlf0Rt2k8itBKk8GG3AQzMhbhP:1411939746:245dd3572340239e71357eef1895a64caf71ab98"; Version=1; Max-Age=1799; Expires=Sun, 28-Sep-2014 21:59:05 GMT; Path=/
Set-Cookie: sl="delete me"; Version=1; Max-Age=0; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/
Set-Cookie: sl="delete me"; Version=1; Domain=.www.linkedin.com; Max-Age=0; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/
Set-Cookie: s_leo_auth_token="delete me"; Version=1; Max-Age=0; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/
Set-Cookie: JSESSIONID="ajax:3378763733454792746"; Version=1; Domain=.www.linkedin.com; Path=/
Set-Cookie: visit="v=1&G"; Version=1; Max-Age=63072000; Expires=Tue, 27-Sep-2016 21:29:06 GMT; Path=/
Set-Cookie: lang="v=2&lang=en-us"; Version=1; Domain=linkedin.com; Path=/
Set-Cookie: lang="v=2&lang=en-us"; Version=1; Domain=linkedin.com; Path=/
Set-Cookie: bcookie="v=2&71ea2c02-5df9-45f7-8db5-5d24119e31f2"; domain=.linkedin.com; Path=/; Expires=Wed, 28-Sep-2016 09:06:38 GMT
Pragma: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Cache-Control: no-cache, no-store
Connection: keep-alive
X-Li-Pop: PROD-IDB2
Set-Cookie: lidc="b=VB78:g=109:u=1:i=1411939746:t=1412026146:s=3048875758"; Expires=Mon, 29 Sep 2014 21:29:06 GMT; domain=.linkedin.com; Path=/

HTTP/1.1 301 Moved Permanently
Server: Apache-Coyote/1.1
P3P: CP="CAO CUR ADM DEV PSA PSD OUR"
Location: https://www.linkedin.com
Content-Language: en-US
Content-Length: 0
Vary: Accept-Encoding
Date: Sun, 28 Sep 2014 21:29:07 GMT
X-FS-UUID: 928ba61ab8379813e0dba9dcca2a0000
X-LI-UUID: koumGrg3mBPg26ncyioAAA==
X-Li-Fabric: prod-lva1
Set-Cookie: _lipt=deleteMe; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/
Set-Cookie: leo_auth_token="GST:UicjR0NBQqxraRmVAJE3SkdicNoyfQnr54E9IJoJMVdMn8mYU3pN9c:1411939747:93cea2e9d32d7f3b4c65a4a754d58ec46ee426f0"; Version=1; Max-Age=1799; Expires=Sun, 28-Sep-2014 21:59:06 GMT; Path=/
Set-Cookie: sl="delete me"; Version=1; Max-Age=0; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/
Set-Cookie: sl="delete me"; Version=1; Domain=.www.linkedin.com; Max-Age=0; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/
Set-Cookie: s_leo_auth_token="delete me"; Version=1; Max-Age=0; Expires=Thu, 01-Jan-1970 00:00:10 GMT; Path=/
Set-Cookie: JSESSIONID="ajax:7472706816298936279"; Version=1; Domain=.www.linkedin.com; Path=/
Set-Cookie: visit="v=1&G"; Version=1; Max-Age=63072000; Expires=Tue, 27-Sep-2016 21:29:07 GMT; Path=/
Set-Cookie: lang="v=2&lang=en-us"; Version=1; Domain=linkedin.com; Path=/
Set-Cookie: lang="v=2&lang=en-us"; Version=1; Domain=linkedin.com; Path=/
Set-Cookie: bcookie="v=2&de6e7146-727b-4c72-82fd-c0c5b062d0b3"; domain=.linkedin.com; Path=/; Expires=Wed, 28-Sep-2016 09:06:39 GMT
Pragma: no-cache
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Cache-Control: no-cache, no-store
Connection: keep-alive
X-Li-Pop: PROD-IDB2
Set-Cookie: lidc="b=VB78:g=109:u=1:i=1411939747:t=1412026147:s=3097703888"; Expires=Mon, 29 Sep 2014 21:29:07 GMT; domain=.linkedin.com; Path=/
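For reference, curl can be capped at a handful of redirects instead of chasing the loop forever; this is a quick way to confirm a loop from the command line (the limit of 5 is arbitrary):

# Follow at most 5 redirects, then report how many were followed and where we ended up
curl -s -o /dev/null -I -L --max-redirs 5 \
  -w 'redirects followed: %{num_redirects}, final URL: %{url_effective}\n' \
  https://www.linkedin.com/
# curl exits with error 47 ("Maximum redirects followed") if the limit is hit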

Fedora EC2 HVM AMI

In this blog post, I am going to tell you how to set up a Fedora HVM image from the official Fedora PV image on Amazon EC2. In general, this covers how to convert a PV image into an HVM image for AWS EC2. This works for Fedora, but might not work for other OSes.

I assume you know about AWS, EC2, AMI, HVM and PV. Amazon has been steadily pushing towards HVM. With the latest round of launches on July 1, 2014, Amazon now shows only HVM images by default when you go to launch an instance; you now have to search for PV images. One benefit of using HVM images is better access to the underlying hardware, which enables features such as enhanced networking.

Fedora has official Amazon AWS EC2 AMIs available at http://cloud.fedoraproject.org/. However, at present it only has para-virtualized (PV) images.

I have been working extensively on AWS EC2 for the last few weeks and have realized that for best performance, we should be using HVM images.

For this particular project, I was interested in the multiqueue block layer, which was introduced in kernel 3.13.

The first step is to spin up an instance from the existing PV AMI. It's not strictly necessary, since you just need the snapshot behind the AMI; however, I created an instance as I needed to make some changes to the image. The existing AMI available from Fedora has kernel 3.10, so I had to do a yum upgrade to get the latest available kernel, 3.15.

After launching the instance from the PV AMI and making changes as per your needs (in my case, sudo yum upgrade -y), create a new AMI using the AWS tools or the web console, whichever you are comfortable with.
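If you prefer the command line over the web console, the EC2 CLI tools (set up below) can do this step too; a minimal sketch, where the instance id, name and description are hypothetical placeholders:

# Create an AMI from the customised PV instance
ec2-create-image i-0123abcd -n 'Fedora_20_PV_kernel_3.15' -d 'Fedora 20 PV with kernel 3.15'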

For the rest of the steps, you need to set up the EC2 API tools or the EC2 command line tools. I use the EC2 CLI tools.

After the AMI is ready, find the snapshot id used by the above AMI under EC2 > Elastic Block Store > Snapshots in the EC2 Console,

or, if you have the EC2 CLI tools set up:

ec2-describe-images ami-id_of_above_created_ami

and find the snapshot id for the AMI in the output. It would be something like snap-a12b34cd.

Once you have the snapshot id, you can register a new AMI using the above snapshot.

To register a new HVM AMI using the above snapshot, you need to use the CLI/API tools, since AWS still doesn't have this in the web console (it might come soon).

ec2-register -a x86_64 -d '3.15.7-200.fc20.x86_64' -n 'Fedora_20_HVM_AMI' --sriov simple --virtualization-type hvm -s snap-b44feb18 --root-device-name /dev/sda1

where

-d is the AMI description
-n is the AMI name
-s is the snapshot id found above
-a is the architecture
--virtualization-type hvm is what makes it an HVM AMI
--sriov simple enables enhanced networking, though it might be redundant; I am not sure

This registers a new HVM AMI based on the snapshot created from the PV image.
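To verify the result, you can launch an instance from the newly registered AMI; a sketch with placeholder values (the AMI id and key pair are hypothetical, and C3 instance types are among those that support enhanced networking):

# Launch one instance from the freshly registered HVM AMI
ec2-run-instances ami-0123abcd -n 1 -t c3.large -k my-keypair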

So, in this post, we discussed how you can convert an existing PV AMI into an HVM AMI; specifically, how to create a Fedora HVM AMI from the official PV AMI.

PS: I have made my Fedora HVM AMI public in the US East region, so just search for Fedora and you will find it. Feel free to create AMIs of your own and/or copy it over to other regions.

PPS: Want to know what cool place I work at, where we get to play with the latest state-of-the-art technologies, be it kernels or the latest SSDs? Head over to http://aerospike.com/careers to join the team!

How to earn Fedora Badges?

Fedora recently launched https://badges.fedoraproject.org, a recognition system that awards badges based upon certain activities that you do within the Fedora Infrastructure Environment.

I have recently been working with the Fedora Infrastructure and came to know about the badges. Needless to say I was excited and wanted some of my own.

The first step to being a part of the Fedora infrastructure is to have a Fedora Account System (FAS) account. You can sign up for it at http://admin.fedoraproject.org/accounts/.

Once you have created your account, you should add a secret question to your account. This will earn you https://badges.fedoraproject.org/badge/riddle-me-this.

https://badges.fedoraproject.org/pngs/fas-riddle-me-this.png

Adding your timezone to your account profile earns you the

https://badges.fedoraproject.org/badge/white-rabbit

https://badges.fedoraproject.org/pngs/fas-white-rabbit.png

By adding your SSH or GPG key to your account, you can earn the https://badges.fedoraproject.org/badge/crypto-panda

https://badges.fedoraproject.org/panda/fas-crypto-panda.png
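If you don't have an SSH key yet, generating one and pasting the public half into your FAS profile is all it takes; a minimal sketch (the comment string is illustrative):

# Generate a 4096-bit RSA key pair; accept the default path and set a passphrase
ssh-keygen -t rsa -b 4096 -C "you@example.com"
# Print the public key and paste it into your FAS account profile
cat ~/.ssh/id_rsa.pub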

Accepting the FPCA (Fedora Project Contributor Agreement) earns you the https://badges.fedoraproject.org/badge/involvement

https://badges.fedoraproject.org/pngs/involvement.png

To earn the https://badges.fedoraproject.org/badge/let-me-introduce-myself , you need to create your user page on the Fedora wiki. Mine is at https://fedoraproject.org/wiki/User:Anshprat

https://badges.fedoraproject.org/pngs/wiki-let-me-introduce-myself.png

Making 10 edits on the Fedora wiki earns you the https://badges.fedoraproject.org/badge/junior-editor

https://badges.fedoraproject.org/pngs/junior-editor.png

Participating in one of the Fedora meetings in #fedora-meeting on irc.freenode.net earns you
https://badges.fedoraproject.org/badge/speak-up!

https://badges.fedoraproject.org/pngs/irc-speak-up.png

This is a brief overview of how to earn some of the badges. I will be updating it soon with more badges and more details on the steps mentioned above.

You can see all the badges at https://badges.fedoraproject.org/explore/badges

And here are the badges I have earned so far:

https://badges.fedoraproject.org/user/anshprat

Moving from Rackspace to DigitalOcean

I finally moved my hosting from Rackspace to DigitalOcean (hereafter mostly referred to as DO). The reasons were simple: a better configuration for half the price, especially in terms of memory. On Rackspace, I was paying $10 a month for about 245 MB of RAM; on DO, I am getting 491 MB of RAM for $5. On Rackspace, I had to resort to a 5-minute cron job that kept restarting httpd and cleaning up the cache to keep things sane. Hopefully, things will be better at DO.
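For the curious, the Rackspace band-aid was essentially a cron entry along these lines (a sketch; the cache path is hypothetical and depends on your caching plugin):

# Restart httpd and clear the page cache every 5 minutes to keep memory usage sane
*/5 * * * * /sbin/service httpd restart >/dev/null 2>&1
*/5 * * * * rm -rf /var/cache/wordpress/* >/dev/null 2>&1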

I first came across DO through Facebook ads. The thing that caught my eye was the SSD hosting. In my present job at Aerospike, Inc., I deal with SSDs on a daily basis, so hosting my own blog on SSD was surely attractive. Needless to say, getting it at half my existing hosting charges was also enticing. I sat on it for a few weeks, finally got around to creating a DO account, and stopped just short of adding my payment details (to search for a discount code). A few weeks later, I went back, added my payment details (sans any discount coupons) and went ahead to create my first droplet. The UI asks for the hostname first, at the top, and then a few clicks to choose your OS version. I missed the hostname part at first and just selected Fedora. On submit, the UI gave an error that the hostname was missing; a quick scroll up, and then the form was all green. DO boasts of 55 seconds to get your droplet up. While I did not actually time it, the experience was definitely faster than creating an EC2 instance on Amazon AWS, and faster than Rackspace as well.

Screenshot from 2013-08-31 23:05:00

Screenshot from 2013-08-31 23:05:20

It's easy to miss the hostname if you scroll right down to the lower part of the page where you do the size and OS selection.

DO mails over your root password, and then you are pretty much on your own. Since I am more comfortable setting up my own environment from the terminal, it was faster for me to create users and add my SSH keys myself than to rely on pre-generated users and the like.

I then quickly did a yum install of WordPress to pull in the required dependencies, exported and imported from my older blog installation, and after a quick redo once the domain changed (just dropping the database and importing again), my new blog install was up and running. The reason I chose to reinstall the database for WordPress was that the first time I had installed using stg.hackalyst.info/blog/wp, and changing the CSS and JS links later would have been a pain. (Though it now looks like WordPress has a way of specifying an alternate install location in the configuration; I will check it out later.)
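For reference, a WordPress export/import of this sort can be done with a plain database dump; a sketch, assuming the database is called wordpress on both ends (names are illustrative):

# On the old server: dump the blog database
mysqldump -u root -p wordpress > blog-backup.sql
# On the new droplet: recreate the database and import the dump
mysql -u root -p -e 'DROP DATABASE IF EXISTS wordpress; CREATE DATABASE wordpress;'
mysql -u root -p wordpress < blog-backup.sql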

After installing WordPress, I tried to activate Jetpack and kept getting this error:

Your Jetpack has a glitch. Something went wrong that’s never supposed to happen. Guess you’re just lucky: xml_rpc-32601
Try connecting again.

Error Details: The Jetpack server could not communicate with your site’s XML-RPC URL. If you have the W3 Total Cache plugin installed, deactivate W3 Total Cache, try to Connect to WordPress.com again, reactivate W3 Total Cache, then clear W3 Total Cache’s cache.

A few quick web searches later, I realised it was because the DNS name had not yet propagated for the server. I waited a few hours, and later it just worked fine.

Another problem I had with the new WordPress install was setting up the permalinks. On configuring the permalinks, I kept getting 404s. I searched the docs a bit, but found the solution in my own older post when I searched for "permalink".

http://hackalyst.info/2010/02/17/setting-up-your-websiteblog-using-wordpress-on-a-slicehost-slice/

In short, I had to set

AllowOverride FileInfo

in the Directory directive in httpd.conf, found in the /etc/httpd/conf folder.

Rather, this time I decided to add the blog directory itself to the virtual host config, and voila, it all worked fine.
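For reference, the virtual-host approach looks roughly like this; a sketch only, assuming the blog lives in /var/www/html/blog and a per-site config file under /etc/httpd/conf.d/ (both paths are illustrative):

# Append a name-based virtual host that also grants WordPress the override it needs
cat >> /etc/httpd/conf.d/blog.conf <<'EOF'
<VirtualHost *:80>
    ServerName hackalyst.info
    DocumentRoot /var/www/html/blog
    <Directory /var/www/html/blog>
        AllowOverride FileInfo
    </Directory>
</VirtualHost>
EOF
# Then reload Apache
service httpd reload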

Another warning I got while doing the WordPress install and configuration with Apache httpd was:

AH00548: NameVirtualHost has no effect and will be removed in the next release

I wanted to know what the change actually meant and found this link in a comment:
httpd.apache.org/docs/current/vhosts/name-based.html, which led me to

http://httpd.apache.org/docs/2.4/upgrading.html#misc

The NameVirtualHost directive no longer has any effect, other than to emit a warning. Any address/port combination appearing in multiple virtual hosts is implicitly treated as a name-based virtual host.

Though I still haven't found what the number AH00548 means. Maybe I will have to dig into the source code or the mailing list archives to find out.

Coming back to DO: though they advertise SSD setups, the VM I am on says its disk is rotational.

[root@hackalyst conf]# cat /sys/block/vda/queue/rotational
1

Will see if I can figure out the actual disk.

So far my DO experience has been good. Fingers crossed; let's see how it goes. I will be disabling my Rackspace server soon.

IPv6 is still missing on DO though, so I might go back to a tunnel, like I was doing on Slicehost before moving to Rackspace.

Here is how to get IPv6 using tunnels. Though the blog post says "in India", it is geography independent.

google.ps hacked

Looks like google.ps got its DNS hacked.

Updates below.
Update 2: Looks like it's a .ps registry hack instead! (based on HN)
Update 3: All's well again.

[anshup@aero ~]$ host google.ps
google.ps has address 41.77.118.2
google.ps mail is handled by 0 google.ps.

[anshup@aero ~]$ host 41.77.118.2
2.118.77.41.in-addr.arpa domain name pointer abubakr.genious.net.

[anshup@aero ~]$ sudo nmap 41.77.118.2

Starting Nmap 6.40 ( http://nmap.org ) at 2013-08-26 23:33 IST
Nmap scan report for abubakr.genious.net (41.77.118.2)
Host is up (0.21s latency).
Not shown: 981 filtered ports
PORT STATE SERVICE
20/tcp closed ftp-data
21/tcp open ftp
22/tcp closed ssh
25/tcp open smtp
26/tcp open rsftp
53/tcp open domain
80/tcp open http
110/tcp open pop3
143/tcp open imap
389/tcp closed ldap
443/tcp open https
465/tcp open smtps
554/tcp open rtsp
587/tcp open submission
993/tcp open imaps
995/tcp open pop3s
2000/tcp closed cisco-sccp
3306/tcp open mysql
7070/tcp open realserver

[anshup@aero ~]$ dig NS google.ps

; <<>> DiG 9.9.3-rl.13207.22-P2-RedHat-9.9.3-5.P2.fc19 <<>> NS google.ps
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR
;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:
;google.ps. IN NS

;; ANSWER SECTION:
google.ps. 21590 IN NS omar.genious.net.
google.ps. 21590 IN NS hamza.genious.net.

;; Query time: 2 msec
;; SERVER: 10.0.1.1#53(10.0.1.1)
;; WHEN: Mon Aug 26 23:48:13 IST 2013
;; MSG SIZE rcvd: 77

[anshup@aero ~]$ dig @8.8.8.8 google.ps

; <<>> DiG 9.9.3-rl.13207.22-P2-RedHat-9.9.3-5.P2.fc19 <<>> @8.8.8.8 google.ps
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 512
;; QUESTION SECTION:
;google.ps. IN A

;; ANSWER SECTION:
google.ps. 7367 IN A 41.77.118.2

;; Query time: 14 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Mon Aug 26 23:50:56 IST 2013
;; MSG SIZE rcvd: 54

UPDATE:

Looks like www.google.ps is fine whereas google.ps is hacked.

[anshup@aero ~]$ host www.google.ps
www.google.ps has address 74.125.236.55
www.google.ps has address 74.125.236.63
www.google.ps has address 74.125.236.56
www.google.ps has IPv6 address 2404:6800:4007:800::1018

[anshup@aero ~]$ host google.ps
google.ps has address 41.77.118.2
google.ps mail is handled by 0 google.ps.
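To see which side is serving the bad record, the zone's listed nameservers can be queried directly, bypassing any resolver cache (a quick check, using the genious.net nameservers from the dig output above):

# Ask the authoritative servers themselves what they return for the apex A record
dig @omar.genious.net google.ps A +short
dig @hamza.genious.net google.ps A +short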

Also, the server hosting the hacked Google page seems to belong to this guy:

https://twitter.com/ElZakaria

https://www.facebook.com/preemptif

Update 2
Based on Hacker News, it looks like it's a .ps registry hack instead.
https://news.ycombinator.com/item?id=6278976
It looks similar to the .ro (Romanian) registry hack late last year.

Update 3

At around 0530 hrs IST (0000 UTC) on Aug 27th, the DNS at genious.net seems to have been re-populated with the proper Google IPs.

;; ANSWER SECTION:
google.ps. 7349 IN NS omar.genious.net.
google.ps. 7349 IN NS hamza.genious.net.

;; Query time: 8 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Aug 27 05:20:01 BDT 2013
;; MSG SIZE rcvd: 88

;; ANSWER SECTION:
google.ps. 299 IN A 74.125.236.50
google.ps. 299 IN A 74.125.236.49
google.ps. 299 IN A 74.125.236.52
google.ps. 299 IN A 74.125.236.48
google.ps. 299 IN A 74.125.236.51

;; Query time: 86 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Aug 27 05:20:01 BDT 2013
;; MSG SIZE rcvd: 118

This is because the TTL for the genious.net DNS records was quite high, preventing the Google SOA from propagating.
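One way to sidestep resolver caching entirely and see what the .ps registry is currently delegating to is to trace the resolution from the root servers:

# Walk the delegation chain from the root instead of asking a caching resolver
dig +trace google.ps A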

At around 0722 IST, the SOA TTL expired from Google's own 8.8.8.8 DNS.

;; ANSWER SECTION:
google.ps. 149 IN NS omar.genious.net.
google.ps. 149 IN NS hamza.genious.net.

;; Query time: 8 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Aug 27 07:20:01 BDT 2013
;; MSG SIZE rcvd: 88

;; ANSWER SECTION:
google.ps. 299 IN A 74.125.236.52
google.ps. 299 IN A 74.125.236.49
google.ps. 299 IN A 74.125.236.51
google.ps. 299 IN A 74.125.236.48
google.ps. 299 IN A 74.125.236.50

;; Query time: 93 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Aug 27 07:20:01 BDT 2013
;; MSG SIZE rcvd: 118
;; ANSWER SECTION:
google.ps. 21599 IN NS ns2.google.com.
google.ps. 21599 IN NS ns3.google.com.
google.ps. 21599 IN NS ns1.google.com.
google.ps. 21599 IN NS ns4.google.com.

;; Query time: 114 msec
;; SERVER: 8.8.8.8#53(8.8.8.8)
;; WHEN: Tue Aug 27 07:30:01 BDT 2013
;; MSG SIZE rcvd: 120

Screenshot from 2013-08-26 23:49:27

screen vertical split rpm

I've been using screen with vertical split for some time now, and whenever I move my workspace to a new environment, it's a fight to get either a build or an RPM with vertical split support.

Recently I moved my workspace to CentOS 6.3 and used the following RPM, which installs with glibc < 2.12:

ftp://fr2.rpmfind.net/linux/fedora/linux/releases/15/Everything/x86_64/os/Packages/screen-4.1.0-0.3.20101110git066b098.fc15.x86_64.rpm

http://www.rpmfind.net//linux/RPM/fedora/devel/rawhide/x86_64/s/screen-4.1.0-0.15.20120314git3c2946.fc20.x86_64.html
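For completeness, installing the downloaded RPM and testing the split is straightforward (a sketch; the key bindings are the defaults in screen builds that include vertical split):

# Install the downloaded package
sudo yum localinstall screen-4.1.0-0.3.20101110git066b098.fc15.x86_64.rpm
# Inside screen: Ctrl-a | splits the window vertically,
# Ctrl-a Tab jumps between regions, Ctrl-a X closes the current region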


Drinks, Food, Music @ Plus91, Bangalore!

Last Sunday, on April 7th, 2013, I had the opportunity to attend a bloggers-cum-food-review meet at the newly opened Plus91 Cafe Bar, a food and drinks place that calls itself a fast food restaurant, snack place and café, and rightly so. This is another venture from JSM Corporation Pvt. Ltd., the same group that brings you HRC, Shiro and CPK, amongst others.

Considering the heat that Bangalore is facing this year, we started off with a very nice range of mocktails and cocktails, including Virgin Mojitos, Virgin Marys, Pina Coladas and Long Island Iced Teas. There are few places in Bangalore which do a decent LIT, and the one at Plus91 is decent enough. We then moved on to a wide variety of starters, ranging from street foods like pani puri, some chaats and miniature masala papads, to veg and non-veg dishes including nachos, baby corn and chicken preparations. All of them were mouth-wateringly delicious. A special mention for the presentation: they were served on plates that looked like leaf mouldings. For the non-vegetarians, the Buffalo Chicken Wings at this place are a must-try! They were definitely among the best I have had in a long time in Bangalore: juicy, rich and succulent.

Once we were done with the starters and the drinks, it was time for the real deal, the main course. We went in for the various sizzlers, including a veg sizzler (a first for me), a beef sizzler and a chicken sizzler. The sizzlers were pretty good and are definitely worth a visit (and re-visits!).

By now we were all pretty full, but there is always some room for dessert. We had Gulab Jamuns, and they were one of the pleasant surprises of the day. It is usually difficult to find a good Gulab Jamun at any "big" place or chain; not that they are bad, but they are not the desi, Indian-feeling Gulab Jamuns. Some are too big, some too small, some too soft or too sweet. But for once, the ones at Plus91 were just perfect! Maybe I was already intoxicated by all the fabulous food, but it was the perfect end to a wonderful afternoon. To wrap it up, nothing could sum it up better than this tweet of mine…

I am definitely looking forward to visiting this place again soon with friends, family and loved ones!

Wired Up!

Long Island Iced Tea

Baby Corn! And the leaf shaped moulded plate.

Ye nachos mujhe de de Thakur! ("Give me those nachos, Thakur!")

Me enjoying the sizzling sizzlers (L) with Santosh(R)

the delicious gulab jamun!

And finally the bloggers/tweeple !


PS: All photos thanks to @uniqgeek's post.

More pics at this facebook page.

Fedora laptop setup – Dell Inspiron 1420

I've been using Fedora since FC3. I bought my laptop in 2008 and have been running Fedora on it ever since; I was on Fedora 7 when I first bought it, and today I am on Fedora 18. After every install of Fedora, I end up looking up fixes for the same regular problems: sound, disabling hibernate/sleep on lid close, and so on.

This blog post is a placeholder for all such fixes going forward:

No Sound problem:

cat /etc/modprobe.d/snd-hda-intel.conf
options snd-hda-intel model=dell-3stack
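After adding that option, the module needs to be reloaded (or just reboot) for it to take effect; a sketch, assuming nothing is currently holding the sound device open:

# Reload the Intel HDA driver so the new model= option is picked up
sudo modprobe -r snd-hda-intel && sudo modprobe snd-hda-intel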

Stop sleep/hibernate on lid close

$ gsettings list-recursively org.gnome.settings-daemon.plugins.power|grep lid
org.gnome.settings-daemon.plugins.power lid-close-ac-action 'blank'
org.gnome.settings-daemon.plugins.power lid-close-battery-action 'blank'
org.gnome.settings-daemon.plugins.power lid-close-suspend-with-external-monitor false

$ gsettings set org.gnome.settings-daemon.plugins.power lid-close-ac-action 'blank'

I also found this:
http://nottooamused.wordpress.com/2012/12/29/fedora-17-and-18-how-to-disable-auto-suspend-when-laptop-lid-is-closed/


I tried Fedora 20 on a Dell Vostro 1450 Laptop, and got problems with the wifi card.

[root@aero anshup]# lspci |grep -i network
07:00.0 Network controller: Broadcom Corporation BCM43142 802.11b/g/n (rev 01)

The way to fix this in Fedora 20 is:

Install the RPM Fusion free and nonfree repos:

su -c 'yum localinstall --nogpgcheck http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm'

Now install akmod-wl and the kernel headers:
su -c 'yum install akmod-wl "kernel-devel-$(uname -r)"'
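After the install, the wl module has to be built for the running kernel and loaded; a reboot does this automatically, or it can be done by hand (a sketch):

# Build the akmod packages for the current kernel and load the Broadcom wl module
su -c 'akmods --force && modprobe wl'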