A Warning on the Current Apple Public Betas

macOS High Sierra

As usual, I decided to go ahead and install the public beta of macOS High Sierra. While this is public beta 1, it’s the same build as developer beta 2. So far, not so good. While most things still work dependably, there are graphics issues (probably related to the new Metal 2) that prevent you from using Final Cut and iMovie, so don’t update if you depend on those applications regularly (I haven’t tested Premiere). The current version of Firefox seems to struggle with web pages that contain Flash content, and Safari struggles with page loads and dynamic content: some pages get stuck with stale information, and refreshing sometimes does nothing.

iOS 11

I’ve also been testing public beta 2 of iOS 11. The last version was really bad! Many of the most noticeable issues have been smoothed out now, but problems remain: things like the new Control Center not actually toggling Bluetooth off, even though it indicates that it has. I’ve also noticed that messages and notifications sent to watchOS (non-beta) arrive several minutes late. I’ll get an iMessage from someone and read it on my phone, then several minutes later I’ll get a notice on my watch.

Considering these problems are pretty apparent and easy to reproduce, I imagine they’ll be resolved quickly, probably by the next public betas. Overall, I would recommend giving iOS 11 a try while avoiding macOS High Sierra until a later build is available. I’ll update this when a new version comes out and I’ve had a chance to evaluate it.

Some Concerns About the Direction of the Mac

I’ll get right to it: I’m concerned that Apple is selling macOS short in terms of capabilities in order to promote their proprietary technologies. Don’t get me wrong, the Mac is still a great tool for getting things done, and it’s still mostly a joy to use on a daily basis, but if what you’re trying to do strays from Apple’s vision, things start to fall apart.

Take graphics, for example. Steam for Mac was a huge development that really helped people take gaming on the Mac more seriously, and it’s clear from the Mac App Store listings that developers small and large would like to sell games for the Mac. Rather than helping macOS and the Macs it runs on reach their potential, Apple has consistently opted to ignore new technologies like Vulkan in favor of their own Metal, when they should be supporting both.

The lack of DirectX in macOS leaves developers having to choose between building against an increasingly antiquated OpenGL implementation, adopting Metal and supporting three graphics APIs across Windows, macOS, and Linux, or ditching macOS entirely. All too often, the latter is chosen.

Year after year, Apple releases Macs with compelling hardware, only to sell that hardware short with poor software support. How many of you end up installing Windows in Boot Camp just so you can play your games? The real answer is ‘too many’. When I was testing the Oculus Rift, performance on OS X was so poor that I had to boot into Windows to run a simulation smoothly. That sort of experience largely contributed to Oculus ultimately dropping support for the Mac entirely.

There are other areas where Apple has clearly done a poor job on something just because it isn’t their favorite thing at the moment. Take Xcode, Apple’s software development environment: Apple has turned their nose up at providing sorely needed features for C and C++ developers for years, while providing plenty of features found in more modern code editors for their preferred language, Swift. You can’t write games for the Unreal or Unity engines in Swift, though, so they again shoot themselves in the foot here.

And, as an IT professional, I could go on and on about the unfortunate decisions Apple has made in regard to supporting Macs in an enterprise environment.

Here’s the deal: we know Macs aren’t gaming machines. They aren’t marketed as such, either. But we DO expect Apple to commit to making these machines as powerful and flexible as possible. That is, after all, what the marriage of hardware and software is supposed to be all about. I feel like the Apple of five years ago understood this and couldn’t wait to implement any new technology that would make their product stronger. The Apple of today, however, is more concerned with whether a new technology will make the company’s portfolio stronger. That demonstrates a clear shift of focus from caring about the customer to caring about themselves, and that, among other reasons, is why I’m concerned about the direction of the Mac.

Feel free to tell me what you think in the comments below.

Is The New (2016) MacBook Pro A Good Buy?

I went ahead and upgraded from my mid-2012 15″ Retina MacBook Pro to the new 15″ MacBook Pro with Touch Bar. Here are some of my thoughts on the subject.

The old 2012 MBP was actually still a good computer, and after four years of daily service as my main machine, that’s pretty impressive. Like the new laptop, it had 16GB of memory and a 1TB SSD (I upgraded the SSD after the fact, when the 256GB drive it came with became too cramped). While the new machine sounds similar on paper, it feels much better. Sure, the RAM and storage have the same capacities in my case, but both are MUCH, MUCH faster! I had become so accustomed to the performance of my old machine that I could actually feel when a performance snag was a disk I/O issue, and I notice no such problem now. The faster RAM also keeps applications snappier once they’re read from disk. While these things do contribute to a better experience, they aren’t necessarily the primary improvements over previous generations. So what are they, then?

The keyboard

I also own a 2015 MacBook, the machine that introduced the new “butterfly” switch keyboard, and I can say without a doubt that the version included in the 2016 MacBook Pro is worlds better. It is a bit noisier, but it’s actually got a sort of satisfying sound to it. Key travel is noticeably increased and improved over the former version, and it’s a keyboard technology I look forward to seeing in Apple’s standalone keyboard offerings.

The backlighting is also a huge improvement over the previous generation. There’s not nearly as much light leakage around the keys (hardly any, in fact), and it’s more comfortable to look at the keys in low-light situations, which is the whole point of a backlit keyboard. Now, if only Apple would include backlighting in its standalone keyboards.

The trackpad

The trackpad is huge. I know lots of people comment on that in the unboxing videos and whatnot, but it’s hard to fully appreciate until you actually have it in front of you. I’ve had a couple of instances of accidental input while typing, but overall, the palm rejection is good enough that it’s not a problem, and I imagine it’ll only get better with future software updates to macOS. On the old machine, I used to have to gauge how far I needed to drag something and ramp up my drag speed to make sure it got there. With this trackpad, it’s never a problem to drag things from one edge of the screen to the other. Overall, I like it.

Form factor

The computer’s smaller packaging and lighter weight actually make me more likely to take it out and about with me. Using it in my lap feels comfortable and enjoyable, whereas the old 2012 model felt a little too large to comfortably maneuver. It may not look like a huge size difference in photos, but it makes a real difference in practice.

The speakers

The speakers are wonderful. Hands down the best speakers of any laptop I’ve ever come across. I watched a movie with them last night and was amazed every time I remembered that I was listening to it on the laptop’s built-in speakers. That Beats acquisition was a good idea for Apple.

USB-C

A lot of people are upset that four USB-C/Thunderbolt 3 ports are the only I/O ports. I actually think this is a good idea. It’s painful for now, because we’re in a transitional phase for the next year or so, but almost all of your basic accessories will be either USB-C or Thunderbolt 3 in the near future, largely because of Apple’s decision to do this. They are one of the few companies with the pull to really get accessory makers behind the new standard, and I’m glad. No more worrying about having enough of this port or that port, no more worrying about which direction to plug a cable in, and no more worrying about which side to plug the charging cable into. Dongles may be inconvenient for now, but it’ll be nice in the not-so-long run.

The Touch Bar

Notice that I saved this for last? Some people complain that the novelty wears off quickly, but that’s misleading: it’s a tool, not a novelty. Getting Doom to run on it or building apps for it misses the point entirely. Keyboard shortcuts are always going to get you where you need to go faster, but you can’t honestly memorize every keyboard shortcut for every app you use, and that’s where the Touch Bar really comes in. It’s much handier than digging through a menu for the tasks you don’t have shortcuts memorized for, and it can do more than a shortcut can. I think of it like a Fluke toner or a DeWalt drill: not something I use every day, but the extra quality is nice to have when it does actually count.

Final Thoughts

So, is it worth it? Well, that depends. $3,200 is quite a lot of money to spend on a computer, and I honestly think they should shave a few hundred dollars off the price tag. That said, if you aren’t scared off by the price and your computer is important enough to you to justify spending that kind of money, the new 2016 MacBook Pro is a solid machine that I doubt you’d end up regretting.

Knowing When Not To Use Drive Snapshotting Software

Whether it’s Drive Vaccine, SmartShield, Deep Freeze, Time Freeze, or the old SteadyState, there are pitfalls to using these tools, and using them incorrectly or in the wrong situation will cause more harm than good. If you’re considering using one of these programs (or already are), think about what your use case is and whether it’s really the right move.

Problem 1: Drive snapshotting causes performance degradation

Simply setting a snapshot (or baseline, or whatever terminology the vendor settles on) and handing the computer to a user is a bad move. If you don’t set the computer to automatically restore to that snapshot every day or sooner, the snapshot will just grow and grow, and it won’t take long before the computer struggles to reconcile every block of commonly needed information against the original baseline. The more blocks that need to be reconciled, the slower the computer gets. After a few weeks, the machine will be so sluggish and unresponsive that your users will either start putting in tickets complaining about performance or just sit there and silently think less of you. These problems can also lead to other issues, like software installers stalling for no apparent reason.

It’s important to note that it doesn’t matter how recent the last snapshot is. The computations must be made against the baseline taken at the time that the software was initially installed. The only way to reset this situation is to completely uninstall and reinstall the software.

Problem 2: Drive snapshotting is a lousy backup strategy

The rapid recovery speed of drive snapshotting may tempt you to take a snapshot and leave it in place in case you need to restore to a working state at some point in the future. The performance issues described in Problem 1 make this a really crappy idea. Having a computer that’s slow all the time is a poor tradeoff for being able to restore quickly. Just use drive imaging software (Clonezilla, Ghost, etc.) or some other traditional backup strategy like a sane person. Sure, the recovery time is a little longer, but it has literally no impact on the daily operation of the computer. Of course, these backup strategies suffer from issues too, which leads us to the next and last point:

Problem 3: Snapshots become old and unreliable too

Reverting to a snapshot is like using an old, stale image to set up a computer. Say you take a snapshot and give the computer to a user. A year later it breaks, and you go to restore the snapshot. Well, congratulations: you now have a broken domain trust relationship and old versions of browsers, plugins, and so on, and you’re still missing all of the user’s newer data that was created or modified after the snapshot was taken. You now have an additional 30 minutes of work to get the computer to what I consider an unclean and unpredictable state.

If you use a modular imaging solution that stays up to date, like MDT, why not simply put a new image on the computer and be done with it?

Conclusion

Essentially, the takeaway is this: only use drive snapshotting in situations where the computer will automatically restore to the snapshot frequently (daily or sooner). Don’t use snapshots as a longer-term disaster recovery strategy. Either reimage the computer like everything else, or capture an image using cloning tools if the setup is really that precious.

Setting Up An MDT Test Environment and Workflow

I’m not going to waste time going over how to install the Windows 10 ADK and MDT; there are guides all over that cover that topic well. Instead, I’m going to explain my strategy for implementing this setup so that I can effect a crude form of versioning and testing and afford myself some flexibility in the environment.

The goals here are simple:

  • Have the ability to experiment with changes without risking the production environment
  • Be able to hold on to changes that work while still having a space to experiment
  • Be able to easily revert to a working setup when experiments muck things up

So, with that in mind, here’s what my setup looks like. I have four deployment shares: two on my admin workstation and two on our MDT server. One of the shares on my workstation is a “build” share used solely for generating reference images. Booting a VM from the LiteTouch ISO in this share automatically deploys and configures Windows, installs Office 2016, and installs all updates from our WSUS before sysprepping and recapturing. This is a special-purpose share that may or may not have utility in your environment, so let’s just ignore it for now.
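For the curious, that sort of hands-off build is typically driven by the build share’s CustomSettings.ini. Here’s a minimal sketch of how such a share might be wired up; the task sequence ID, capture path, and WSUS URL are hypothetical placeholders, not values from my environment:

[Settings]
Priority=Default

[Default]
; Hypothetical values -- substitute your own task sequence, capture path, and WSUS server.
SkipWizard=YES
TaskSequenceID=REFW10X64
DoCapture=YES
ComputerBackupLocation=\\ADMINPC\BuildShare$\Captures
BackupFile=Win10-Reference.wim
WSUSServer=http://wsus.corp.example:8530
FinishAction=SHUTDOWN

The idea is that SkipWizard suppresses the deployment wizard entirely, DoCapture tells LiteTouch to sysprep and capture the machine at the end of the task sequence, and the resulting WIM lands in ComputerBackupLocation.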

The other share on my workstation is a “development” share. As I’m working out problems or experimenting with a new OS or whatnot, I do everything in this share. Nobody uses this share but me, so it’s segregated and safe from the production environment. There are, however, a couple of limitations. Once I have something completed in the development share but am not ready to post it to the production share, what do I do with it? And if I have a problem mostly worked out but need to continue experimenting, I need somewhere else to save my progress. Enter the “testing” deployment share.

The testing share lives on the MDT server. As I make significant progress on an issue, it serves as a staging ground to store that progress while I continue experimenting. This way, should I hose things up afterwards, I can just grab another copy of the testing share and try again. It’s also the place I push changes to from my development share, to ensure they merge correctly with existing files before those changes move on to the production share.

The last share on the MDT server is, of course, the production share.

The method by which I “push” my changes around is linked deployment shares. You can easily create a linked deployment share by right-clicking the “Linked Deployment Shares” folder under “Advanced Configuration” for a given deployment share. There are two replication types, Merge and Replace. I typically use Merge for pushing changes and Replace when something goes wrong.

Here’s how I have it set up:

  • Development share (Merge) => Testing share
  • Testing share (Merge) => Production share
  • Testing share (Replace) => Development share
  • Production share (Replace) => Development share

With a setup like this, I can push changes back and forth between the development and testing shares, I can promote a testing share setup to the production share, and I can recreate the production share in the development share for a clean slate. Notice that there’s no way to sync to the production share directly from the development share. I always migrate to the testing share first and do a test deployment to ensure the sync didn’t overwrite something it shouldn’t have or cause some other sort of general weirdness.

This setup keeps the production share clean, stable, and ready for use while I continue to tinker and improve things.

A few things to note about using Linked Deployment Shares:

  • You have to configure the properties, CustomSettings.ini, and Bootstrap.ini for each share individually. They do not copy over during a replication.
  • You can copy and paste your CustomSettings.ini between them for the most part, unless you have a path included in it that points to a specific share. Then you’d obviously have to update that part for each one 😛
  • You can copy and paste your Bootstrap.ini between the shares too, but be sure to update the DeployRoot to reflect the UNC path to the specific share (see the sample after this list).
  • You should run “Update Deployment Share” for each share to generate the boot media you’ll use to test each one.
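For example, here’s roughly what the testing share’s Bootstrap.ini might look like; the server and share names are hypothetical placeholders:

[Settings]
Priority=Default

[Default]
; DeployRoot must point at this specific share's UNC path.
DeployRoot=\\MDTSERVER\TestingShare$
SkipBDDWelcome=YES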

Hopefully you find this helpful. This setup may not work for, or may be overkill in, your environment, but you can always take the parts you like and make your own variation that best suits your needs. At the very least, hopefully I’ve given you some things to consider when contemplating your own OS deployment setup.

Cheers!

MDT Enables UAC for Built-In Administrator Account

After setting up MDT for our organization, a coworker pointed out some issues with starting a new LiteTouch deployment from within an existing Windows 10 installation. When I saw it, it was obvious the issues were related to UAC. The problems were:

  • The user was being asked for network credentials immediately at the beginning of the wizard.
  • After answering all of the questions and completing the wizard, the computer would reboot into PE and proceed to start the whole wizard over again, forgetting all of the previously provided answers.

I checked and found that, sure enough, the local policy “User Account Control: Admin Approval Mode for the Built-in Administrator account” was set to Enabled. I couldn’t figure out where or how this was getting set, so I ran some test deployments and found that MDT itself was turning it on at the very end of a LiteTouch deployment!

It turns out this is done by a few lines in the LTICleanup.wsf file. Microsoft did this for Windows 8 and above so that the built-in Administrator account could open modern Windows apps, such as Edge. I’d never encountered the issue before because I’d only ever used MDT with Windows 7 in the past.

I disagree with this decision, not only because it’s mostly useless and, worse, encourages people to use the Administrator account for daily use, but also because it causes the aforementioned issues on future MDT deployments.

To fix this issue, simply comment out lines 144-150 in LTICleanup.wsf, which read:

If oEnvironment.Item("OSCurrentVersion") <> "" then
  oUtility.GetMajorMinorVersion(oEnvironment.Item("OSCurrentVersion"))
  If ((oUtility.VersionMajor = 6 and oUtility.VersionMinor >= 2) or oUtility.VersionMajor >= 10 ) then
    oLogging.CreateEntry "Re-enabling UAC for built-in Administrator account", LogTypeInfo
    oShell.RegWrite "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\FilterAdministratorToken", 1, "REG_DWORD"
  End if
End if 

You can comment out each line by simply adding an apostrophe to the beginning of it.
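After commenting, the block should look like this:

'If oEnvironment.Item("OSCurrentVersion") <> "" then
'  oUtility.GetMajorMinorVersion(oEnvironment.Item("OSCurrentVersion"))
'  If ((oUtility.VersionMajor = 6 and oUtility.VersionMinor >= 2) or oUtility.VersionMajor >= 10 ) then
'    oLogging.CreateEntry "Re-enabling UAC for built-in Administrator account", LogTypeInfo
'    oShell.RegWrite "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\FilterAdministratorToken", 1, "REG_DWORD"
'  End if
'End if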

You can find LTICleanup.wsf in the Scripts folder of your deployment share.
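If you’d rather not edit a stock MDT script (your change would be lost if you upgrade or reinstall MDT), another option would be to flip the value back yourself after deployment. Here’s a rough, untested sketch that resets the same registry value LTICleanup.wsf sets; you could run it manually or as a one-time post-deployment step:

' Hypothetical post-deployment fix: turns Admin Approval Mode back off
' for the built-in Administrator account by resetting the registry value
' that LTICleanup.wsf sets to 1.
Option Explicit
Dim oShell
Set oShell = CreateObject("WScript.Shell")
oShell.RegWrite "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\FilterAdministratorToken", 0, "REG_DWORD"
WScript.Echo "FilterAdministratorToken reset to 0."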

Cheers!

An update to my thoughts on iOS 10

A while ago, I wrote that my impressions of iOS 10 were less than stellar. With the announcements surrounding the next iPhone, Apple Watch and, presumably, iOS 10 right around the corner, I wanted to share my updated impressions.

I initially noted quite clearly that iOS 10 was very unstable. As I figured would happen, most of those issues have been cleared away. Apps no longer crash, and the entire experience is quick and reliable. I’ve also added an iPhone 6S Plus to test alongside my last-generation iPhone 6 Plus, and I can confirm they’re equally reliable.

A few other things have changed since last time: new system sounds have been added to indicate locking the phone, along with different types of keyboard sounds. At first I thought the key sounds were sort of goofy, but they’re actually a nice breath of fresh air in terms of positive feedback while typing.

Overall, I’m really liking the new feature set, from the improvements to Maps, Messages, and Control Center to the new color scheme employed throughout the operating system.

If you’re worried about upgrading to iOS 10 when it first comes out, I’d say you’re relatively safe doing so right away. All of the major kinks are worked out at this point, and anything left is likely a minor annoyance that’ll be swept up by later point releases.

Now, if only I had an iPad to test with…

My thoughts on iOS 10 and macOS Sierra public betas

At the time of this writing, macOS Sierra and iOS 10 are both available as public beta 1. I’m using Sierra on two different laptops and iOS 10 on my iPhone 6 Plus. I like most of the new features, but boy, are these releases rough! I’ve used Apple’s public betas for OS X before, and I’ve used the developer betas for iOS before, but these are the most unstable, performance-challenged betas I’ve seen yet.

Sierra runs noticeably better on the newer of the two laptops (a 2014 MacBook). On that one, performance problems remain and app crashes are common, but at least the computer itself stays running. The older of the two (a 2012 Retina MacBook Pro), however, is a nightmare. The entire OS crashes after every couple of hours of use. If you let it sit for any length of time, it will likely crash when you come back to it. Sometimes it won’t even boot successfully. Even when it does, browsers frequently fail to load the Flash plugin, Siri won’t open... you get the picture.

I would highly recommend waiting until things get a little more polished and hold out for at least the next version of the public beta. Just my two cents.

Why The iPad Pro Wasn’t Right For Me And How It Could Be

I’ve owned several iPads so far: the iPad 2, the Mini, the 3rd gen, the Air, and then the Pro. I currently have none, and I’m okay with that. Here’s why: I’m a power user.

Now, don’t get me wrong, I know the iPad (and similar tablets, for that matter) isn’t supposed to replace a PC. It’s just that I find myself having to jump to a PC so often that it feels less convenient than simply starting on the PC to begin with. What’s worse is that, half the time, when I realize I can’t do what I want on the iPad, I’ll simply give up and not do it rather than switch over to a PC. That’s decidedly less productive.

I was super excited when they announced the iPad Pro, and it turned out to be for good reason. The iPad Pro came so very close to fulfilling my needs that I almost considered ditching the laptop for it. I used it with the Smart Keyboard and found it comfortable and versatile enough to use anywhere and in any situation. The iPad itself was more powerful than I could make use of, and I liked that about it. It felt like I could take on anything with it.

The new multitasking capabilities also gave the impression that I could accomplish the sorts of things a normal power user would on a computer. But it turned out it just wasn’t quite there.

The new features introduced in iOS 10 are welcome improvements, but I feel like some of them should’ve been there from the get-go (I’m looking at you, split-screen Safari). As brilliant as the split-screen concept is, each app has to be written to take advantage of it, which means many apps won’t work with it at all. Even for the ones that do, the app-choosing method needs improvement, since it’s essentially a long vertical list of apps that you flick through until you (finally) find the right one.

The problem

I like to dabble in all sorts of development, and I found Coda for iOS to be an excellent solution for web development. All of your code changes are synced via FTP to a web server, allowing you to quickly view those changes on the server, and it even comes with a nice SSH client. Using this, I was able to develop reasonably well in PHP and Laravel on a remote dev server, and I found it comfortable enough for my needs.

The main problem is that coding involves lots of multitasking. You’re typically doing a lot of research on the Internet (at least I am) while crafting your code in another window, and iOS’s implementation of multitasking and split-screen windows was just too cumbersome for this.

Another problem is that any sort of code that needs to be compiled (Java, C++, etc.) was a non-starter on iOS. The Swift Playgrounds app was a nice addition in iOS 10, but I still don’t think we’ll be building iOS apps within iOS itself anytime soon. I suppose this is a fringe case, and the reality is I’d still be alright with this limitation provided I could comfortably write the code itself on the device.

The solution

The situation will improve once more apps are written to take advantage of multitasking, but I’d like to see multitasking that works whether the app was written for it or not. I’d also like the ability to split the screen into quadrants (sort of like Windows 10), and better, more intuitive ways to create split screens in the first place.

I’d also really like to see a multi-desktop mode (like Spaces in OS X) so that you can quickly switch between multiple split-screen environments.

The future is looking bright, though

Despite these setbacks, I’m happy to see Apple continuing to focus on improving the multitasking experience, and it seems so close. I feel like the true power of this platform will be recognized once people can be as capable on these devices as they are on a computer, at least for a majority of things. The problem is that, right now, many people already meet that criterion not because the platform is where it needs to be, but because their computer skills weren’t that great on a traditional computer to begin with.

The computer power user will likely still find these devices constricting and unproductive until multitasking is as intuitive and feature complete as it is on a traditional computer, whatever that ends up looking like for a tablet.

I’m thinking we’ll probably see the solution within the next two years, and I imagine the current generation of hardware will end up being capable of running it. Whatever the case, I’ve decided to hold off on trying any more iPads until I’ve seen that iOS can accomplish more of the things I need it to do on a daily basis.

Moving across the country for a new job

First of all, welcome to the very first post on my new blog. This blog will end up being a place where a variety of topics are discussed, from work skills and explorations of technical topics to life experiences and the thoughts surrounding them. It’s quite a wide net to cast, but it’ll make it easier to find interesting topics if we don’t restrict ourselves too much out of the gate.


Enough about the blog itself; on to the topic of this post. I’m finally getting to the point in my career where I’m actually being flown across the country for a job interview. I’m looking at the prospect of moving my wife and me from Louisiana to Wyoming, a roughly 2,000-mile journey!

The location is no coincidence; we are specifically looking to move to Wyoming for a myriad of reasons. But the job also happens to be a significant step up in my career. I’d be adding more server and network infrastructure duties to what I’m doing now, and it sounds like I’d also be working as part of a more agile team of technicians.

My current work environment is very compartmentalized, which leads to inflexibility and a lack of a sense of community. The model we follow works well in larger enterprises that need that level of scalability, such as IBM and Microsoft, but it only serves to get in the way in smaller groups (fewer than 10, in this case). I’m looking forward to working with a similar-sized group but in a more open environment, where we take care of problems from beginning to end and focus more on finding solutions than on excuses not to help.

But this post isn’t about that.

This post is about the daunting task of packing up my toys and moving across the country for a different life. The logistics are mind-numbing, really.

There’s the initial drive to get my car and the things I’ll need on a daily basis out there. I’ll have to stay in a hotel for a week or so until I can get into a cheap apartment, where I will live until our current house sells.

I won’t have any furniture when I first move into this apartment: no bed, no table, nothing. I’ll probably get by with an air mattress, a folding chair, and a small folding table initially. To be honest, I don’t want a lot of furniture or stuff with me in the apartment, because that’s more to deal with when the house finally sells and we move into a more permanent location.

Once my wife gets her RN license, she’ll look for a job near me. When she finds one, I’ll fly back down, rent a U-Haul, move all of our stuff the 2,000 miles, and put it in a storage facility. Having the house empty and our stuff within reach of our new location should make the move feel a lot more complete.

Then we just deal with the realtor to sell the house and find a new home to move into!

What do you think? Have you ever had to make such a large move? How did (or would) you handle the logistics of such a situation?