Wednesday, January 6, 2016

Pints: Azzaca vs Jarrylo Test Batch

The perk for signing up with the American Homebrewers Association last year was 4oz of Jarrylo and 4oz of Azzaca hops.  Having used neither of these hops before, I decided to do a couple of test batches to get an idea of their characteristics.  Both batches turned out well, but this was my first brew using water from a new water softener system.  My efficiency dropped to about 60%, down from a fairly reliable 70%.  It took me a couple of batches to figure it out, but I've found that campden-treated city water works much better than my softened water.  I brewed the batches on different days, so when I noticed the lower efficiency, I tweaked my process slightly.  This was also the first brew I did using a copper wort chiller instead of a water bath.

Overall, I liked the Azzaca better.  It had an amazing fruity smell while fermenting, and I think it would be great in any fruity IPA.  The Jarrylo was good too, but much more subtle.  Without further stalling, a couple of pictures and my notes from brew day and fermentation.
I later tweaked the chiller a bit.  It was still much faster than a water bath.

The finished product, after sitting in the keg for a while and clearing.  Both looked exactly the same.
Jarrylo Batch Notes
  • 4 lbs Maris Otter (double crushed)
  • 0.25 lbs C40 (double crushed)
  • 0.25 lbs C80 (double crushed)
  • Jarrylo (AA 14.2):  1/4oz @ 45 min, 1/4oz @ 15 min, and 1/2oz @ 5 min
  • 1 packet of US-05
Expected Efficiency - 75%
Actual Efficiency - ~60%
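
Roughly how that 60% shakes out, assuming book extract potentials (~37 points/lb/gal for the Maris Otter, ~34 for the crystal malts) and about 2.4 gallons of 1.042 wort into the fermentor:

    Potential:  (4 x 37) + (0.25 x 34) + (0.25 x 34) = 165 gravity points
    Actual:     42 points/gal x 2.4 gal = ~101 gravity points
    Efficiency: 101 / 165 = ~61%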

8/21/2015: Brew Day
2 gal reverse osmosis water
1.5 gal soft water
Mash at 152F, put in grain at 120F.
Hold for 1.5 hours.
Mash out at 170 for 10 minutes.
40 min in, 7 brix
Into fermentor with 10 brix, 1.042
One rehydrated packet of US-05
About 60% efficiency.  Fuck.  Differences were a 30-minute-shorter mash, no Burton salts...and the water...I used soft water!
Chiller worked OK...got me to 100F or so.
Airlock activity starting 8/23


8/26/2015: Airlock activity just about stopped.  Gravity is 1.012.  Stopped temperature control.

8/30/2015: Gravity is 1.012; started cold crash.

9/1/2015: Kegged.  Everything was the same as with the Azzaca except the flavor.  Hops were more subtle, with more noticeable bitterness.  I expect it will be a fairly average beer, but the hops might be good for bittering other batches.
9/30/2015: Tasting.  Forgot to put in tasting notes earlier.  Subtle bitterness, but more than the Azzaca; it has a more standard beer flavor.  The Jarrylo is slightly fruity, but quite subdued; the C40/C80 conflicts with it a bit, I think.  All in all a fine beer, refreshing like the Azzaca, slightly tart, I think from the CO2.  Good head and lacing, but the head falls fairly quickly.  Served 270oz.

Azzaca Batch Notes
  • 4 lbs Maris Otter (double crushed)
  • 0.25 lbs C40 (double crushed)
  • 0.25 lbs C80 (double crushed)
  • Azzaca (AA 10.0): 1/4oz @ 45 min, 1/4oz @ 15 min, and 1/2oz @ 5 min
  • 1 packet of US-05
Expected Efficiency - 75%
Actual Efficiency - ~60%

8/22/2015: Brew Day
Doing this one differently.  Mash at 152F for 2 hours, with the grain going in at 152F.
2 gallons reverse osmosis, 1.5 gallons soft tap water.
Starting gravity was 1.044.
Used the tightened chiller.  Worked better; got the wort to a decent pitching temperature in about 10 minutes.  Still needs some design improvements...water coming out was warm, not hot.
Airlock activity next morning.

8/26/2015: Airlock activity just about stopped.  Gravity is 1.020.  Stopped temperature control.

8/30/2015: Gravity is 1.012; started cold crash.
9/1/2015: Kegged.  Did pretty well in terms of trub; didn't bring much gunk into the keg.  The beer had a very tropical, fruity smell.  The caramel of the malt might clash with it a bit.  Tasted a little thin, but I guess that's to be expected.
9/7/2015: Tasting.  Very light and refreshing.  Hint of caramel and tropical fruit on the nose.  Nice tang from the carbonation; it plays well with the tropical fruit of the hops.  Nice head and lacing.  The hops would do well in a fruity IPA.  Very little bitterness.  Served 268oz.

Bytes: Hyper-V 2012 R2 Testing Lab

For work the other week I was able to go back to Houston for a few days to work face to face with the team.  My main reason for going was to set up a Microsoft Hyper-V cluster capable of handling the load of the various testing initiatives we have going on.  I had several servers to work with in our testing lab, all good from a CPU perspective, but lacking enough RAM to really make this work.  After we ordered more RAM, I started planning it all out.  I was going to be working with a bunch of technologies I hadn't really messed with: Starwind Virtual SAN, iSCSI, network aggregation, and Microsoft Scale-Out File Servers.  Two of the servers were already in use for other purposes, so I would need to tread carefully around them so as not to disrupt ongoing tests, and another two were already being used for a small Hyper-V cluster.

I wasted no time setting up the first thing I wanted to test: network aggregation.  I plugged in three of the 1Gb NICs on my server, configured a port channel on the Cisco switch, and configured LACP teaming on the server.  It worked perfectly the first time, transferring data at about the combined 3Gb speed it should.  This early success had me feeling pretty confident.
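
For anyone wanting to replicate it, the server side is basically a one-liner (a sketch; the team and NIC names here are placeholders for whatever your hardware reports):

    # Switch side (for context): put the three ports in a port channel
    # with "channel-group 1 mode active" so they negotiate LACP.
    # Server side: create an LACP team from the three 1Gb NICs.
    New-NetLbfoTeam -Name "LAN-Team" `
        -TeamMembers "Ethernet","Ethernet 2","Ethernet 3" `
        -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic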

The next thing I discovered is that Starwind Virtual SAN works pretty well.  I had no problem carving out an iSCSI disk and presenting it to my small Hyper-V cluster.  Another easy victory: with a little fiddling, the Hyper-V cluster recognized it as usable storage for a cluster shared volume.  That brought me to another thing I learned: iSCSI can be a pain to set up.  To get your storage to pass Microsoft cluster validation, each server needs multiple paths, network or otherwise, to the storage, so if one fails you have a backup.  Getting multipath to work was a bitch.  The "discover multi-paths" tool that Windows Server provides needs a reboot each time you try it, and it doesn't always work.  Of course, it didn't work for me.
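
If you're fighting the same thing, the PowerShell route to enabling MPIO for iSCSI is short (a sketch of the standard cmdlets, not my exact steps):

    # Install the MPIO feature, then have the Microsoft DSM claim iSCSI devices.
    # Claims generally take effect after a reboot.
    Install-WindowsFeature Multipath-IO
    Enable-MSDSMAutomaticClaim -BusType iSCSI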

In the end, I manually started multiple connections to the iSCSI targets.  The validation tests passed, and all looked well...until I moved a VM to the storage and benchmarked it.  What I found were pathetic, slow transfer speeds, a mere 10MB/s or so.  Off to Google.  After much searching, I found the answer and confirmed it: iSCSI and network teaming do not play nice under Windows.  The iSCSI initiator was passing over my fast team and using the crappy 100Mb "backup" network that was plugged into the server.  Okay, fine, turn off teaming and disable the backup network for now.  Better speeds, but still not great.
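
Starting those sessions by hand is scriptable too.  Something like this (the IQN and addresses are placeholders) opens one session per initiator address, which MPIO then aggregates:

    # Register the target portal and connect one session per path.
    New-IscsiTargetPortal -TargetPortalAddress "10.0.1.10"
    $iqn = (Get-IscsiTarget).NodeAddress
    foreach ($addr in "10.0.1.21","10.0.2.21") {
        Connect-IscsiTarget -NodeAddress $iqn -IsPersistent $true `
            -IsMultipathEnabled $true -InitiatorPortalAddress $addr
    }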

Fast forward a bit, and I was able to get the three servers I had earmarked for my Hyper-V cluster set up and running well.  I ran into some minor problems creating the cluster object, which were fixed by delegating computer object creation to the cluster, but after that it was fine.  I moved my VMs onto the iSCSI CSV that the cluster was using and got to work on configuring a more highly available storage solution.  Performance still wasn't very good on the storage end; I was getting less than 1Gb of throughput in benchmarks when I should be getting...more...with 3 1Gb NICs using MPIO.
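
The cluster build itself boils down to two cmdlets (host names and the address are placeholders); the object-creation problem was solved by granting the cluster name object rights to create computer objects in its OU:

    # Run validation first; the report calls out storage and network
    # problems before they bite you.
    Test-Cluster -Node HV01,HV02,HV03
    New-Cluster -Name HVC1 -Node HV01,HV02,HV03 -StaticAddress 10.0.0.50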

On my storage servers, I have two RAID5 arrays per server, each with more than 3TB of capacity.  On one of these arrays, I used Starwind to create a replicated virtual disk.  10Gb NICs in the servers, directly connected, ensure that a fast network is available for speedy replication.  Once that's created, you can connect to it over iSCSI from both machines, which gives you a shared storage object that can be used as a CSV.  Cluster validation passed, and I set up a Scale-Out File Server on the new cluster with an application file share.  Once it was running, I started migrating VMs off of the CSV and onto the SMB3 share.
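
The SOFS role and share come together quickly once the CSV exists (a sketch; the names, path, and accounts are placeholders; the Hyper-V hosts' computer accounts need full access for SMB3 storage):

    # Add the Scale-Out File Server role, then create a continuously
    # available share and mirror its permissions onto NTFS.
    Add-ClusterScaleOutFileServerRole -Name "SOFS1"
    New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\VMStore" `
        -FullAccess "CORP\HV01$","CORP\HV02$","CORP\HV03$" `
        -ContinuouslyAvailable $true
    Set-SmbPathAcl -ShareName "VMStore"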

Once everything was migrated to the SMB3 share, it worked quite well.  Disk transfers were a bit slow, but overall things worked well...until I tried doing a live migration.  While quick migration went fine, live migrations would crawl along until they failed with a fairly unhelpful error.  After much troubleshooting and talking with Microsoft, it was discovered that the delegation of the CIFS and Microsoft Virtual System Migration Service entries had been incorrectly applied by the script I used.  After correcting that small issue, live migration worked perfectly and ran at an appropriate speed for the network.
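
For anyone hitting the same wall: both service types have to be delegated on every host's computer account, for every other host, in both short and fully qualified forms.  A corrected sketch (domain and host names are placeholders, and this is not the original script):

    Import-Module ActiveDirectory
    $domain = "corp.local"
    $hosts  = "HV01","HV02","HV03"
    foreach ($from in $hosts) {
        foreach ($to in ($hosts | Where-Object { $_ -ne $from })) {
            # Allow $from to delegate CIFS and live migration traffic to $to.
            Get-ADComputer $from | Set-ADObject -Add @{
                "msDS-AllowedToDelegateTo" = @(
                    "cifs/$to", "cifs/$to.$domain",
                    "Microsoft Virtual System Migration Service/$to",
                    "Microsoft Virtual System Migration Service/$to.$domain")
            }
        }
        # Hyper-V must also be told to use Kerberos for migrations.
        Invoke-Command -ComputerName $from {
            Set-VMHost -VirtualMachineMigrationAuthenticationType Kerberos
        }
    }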

After several months of running this cobbled-together mess, I am still surprised by how stable it is.  Other than mediocre storage performance, I haven't really had any additional problems with it.