When you denormalize, do you store the foreign keys?
The situation is as follows. I have a normalized database that stores a form template that people fill out to report their experience in different skills. The template form will change over time. Phrasing will change, and the number of skills will change. Once submitted, their data will not change. Users should be able to recreate the form that was originally submitted no matter how much the template may have since changed. Therefore, the system should capture the state of the templates when people submit their reports.
In these sorts of situations, I denormalize the data, storing the template data as well as the responses.
I could leave it normalized and enforce logic that deactivates a template (in whole or part) when it changes, but that is, in my opinion, generally overkill.
My question is, should I store the foreign keys? When I used integer keys (all row ids are surrogate keys), I would have stored them just because integers are cheap and having them might help troubleshoot problems. Now that I’m using GUIDs, I’m less certain the potential benefit is worth the cost.
It’s unlikely that use of the denormalized data in this specific instance will be intensive enough for performance to be a problem, but the data will grow, and it is mostly string data (mostly nvarchar). I’m also curious what people think about this in general.
I’m not completely close-minded, but it’s unlikely I’ll change my mind about using GUIDs and denormalizing. Given that, what’s your opinion on storing foreign keys in denormalized data?
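To make the trade-off concrete, here is a minimal sketch of the snapshot idea using Python’s sqlite3 (all table and column names are hypothetical, and the GUIDs are fake placeholders, not my actual schema): the submission row carries a denormalized copy of the template wording, plus an optional foreign key back to the mutable template table.

```python
import sqlite3

# Hypothetical schema: the response row stores a copy of the template
# wording as it appeared at submit time, alongside an optional GUID
# foreign key kept only for troubleshooting.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE template_skill (
    skill_id TEXT PRIMARY KEY,      -- GUID surrogate key
    prompt   TEXT NOT NULL          -- current wording; may change over time
);
CREATE TABLE report_response (
    response_id    TEXT PRIMARY KEY,  -- GUID surrogate key
    skill_id       TEXT,              -- optional FK, retained for troubleshooting
    prompt_as_seen TEXT NOT NULL,     -- denormalized copy: wording at submit time
    answer         TEXT NOT NULL
);
""")

# Submit a report: copy the template wording into the response row.
conn.execute("INSERT INTO template_skill VALUES ('g-1', 'Rate your SQL skill')")
conn.execute("""INSERT INTO report_response
                SELECT 'g-2', skill_id, prompt, 'Expert'
                FROM template_skill WHERE skill_id = 'g-1'""")

# Later, the template wording changes...
conn.execute("UPDATE template_skill SET prompt = 'Rate your T-SQL skill' "
             "WHERE skill_id = 'g-1'")

# ...but the submitted report still recreates the original form.
prompt, = conn.execute("SELECT prompt_as_seen FROM report_response "
                       "WHERE response_id = 'g-2'").fetchone()
print(prompt)  # Rate your SQL skill
```

Dropping the skill_id column loses nothing the user ever sees; keeping it costs 16 bytes of GUID per row and preserves a trail back to the template.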
Why did you decide to make outgoing email delays a rule instead of an email account option? The change is a bother because it has no benefit and only costs. All outgoing email is now delayed, whereas I used to set this option per account, so I’ve lost flexibility. In addition, outgoing email is now delayed until the rule is satisfied. In earlier versions I could force the email to be sent by clicking “send all,” which I often want to do. Therefore, I’ve also lost an important piece of control I used to have.
Changes that result in less flexibility and less control with no balancing benefit make me very, very cranky.
Outlook 2013 (desktop) bug
If you have a rule on your outbox that delays outbound email, then you may have run into a problem whereby an edited and resent email never goes out.
This happens if you edit the email and resend it while Outlook’s focus is on the outbox. If you set the focus to, for example, the inbox, then click send, the email will be marked as outbound and leave when you expect it to.
Steps to duplicate the problem:
1. Set up a rule to delay outbound emails. Click here to see how (and why) to do that.
2. Create an email message and click Send.
3. Click on Outlook’s Outbox.
4. Open the email created in step 2.
5. Click Send.
Notice that the email message is now shown in regular text, not italics. Messages use italics if they are queued to go out. If you edit the message and save it, but don’t click Send, you see the same behavior.
To work around the bug, before clicking send after editing the email, click on another Outlook folder such as the Inbox. Click send on your email, and it will be properly queued.
From Outlook’s point of view, this may be by design, but, in my opinion, it’s a bug. It’s a change in behavior from (all) earlier versions, and the behavior differs depending on whether the Outbox has focus. In fact, this bug occurs just by going to the Outbox and clicking on, but not opening or editing, the email.
I’ve inherited another legacy VFP application and am updating a routine to use a new format required by my client’s customer. There is a VFP form that eventually calls a program to gather, format, and export relevant data. One of the first things I need to do is change the form and program so their code can be independently tested. Once that’s accomplished, I’ll change the routines to generate the new format.
The question, as always in this common situation, is whether to pull the code into a “blob” method in the form and change all variables to properties, or to call the program from the form with parameters. I realized my usual inclination is the former, because eventually I would be binding the form properties to properties on business classes I’d create down the road. It’s odd how I’ll suddenly see a usual problem in a different light. This is a routine that’s run four times a year, the new format is needed soon, and the client wants to begin planning for a new application to replace this one. Visual FoxPro will probably not be the tool of choice, so there is no reason to build the Queen Mary when a dinghy will get us across the narrow inlet just fine.
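The “call the program with parameters” option is the dinghy here. A rough sketch of the shape, in Python rather than VFP (every name and both formats are made up for illustration): the gather/format/export logic takes its inputs as parameters, so it can be tested without ever instantiating the form.

```python
# Sketch of the parameterized-program approach (Python stand-in for the
# VFP program; all names and formats are hypothetical). Because the
# routine takes its data as a parameter, a test can call it directly.

def export_report(rows, fmt="legacy"):
    """Format the gathered rows for export, independent of any form."""
    if fmt == "legacy":
        return "\n".join(",".join(r) for r in rows)   # old comma format
    if fmt == "new":
        return "\n".join("|".join(r) for r in rows)   # customer's new format
    raise ValueError(f"unknown format: {fmt}")

# The form (not shown) simply collects its values and calls the routine:
print(export_report([("Acme", "Q1"), ("Beta", "Q2")], fmt="new"))
# Acme|Q1
# Beta|Q2
```

When the new format lands, only the routine changes; the form and a handful of direct tests against export_report stay as they are.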
Can you tell I’m prone to make everything into A Work Of Art? Maybe I’m changing after all.
Recently, I found out the easy way, thankfully, that there is a critical reason for not using Windows file and folder encryption. The files and folders will be unavailable if the drive on which they reside is moved to another computer!
I had to revive my previous laptop in order to run an old version of software incompatible with Windows 7. I felt inordinately smug that 1) I hadn’t yet cleaned and dumped the laptop, and 2) I use a removable hard drive for many of the tools my two laptops would need to share.
Luckily, I was only slightly stuck when, on-site at a new client’s, I couldn’t get to information I wanted to use from the removable drive. The files had been encrypted on the drive while it was connected to the other laptop.
I say luckily because, while searching for a workaround (or a reality check), I read posts from people who found out only when a computer had died and they installed the old drive in a new system. Yikes.
I expect this may seem obvious to some of you. It wasn’t to me. And it effectively kills any reason whatsoever to use Windows encryption. Indeed, I’d go so far as to call it a danger.
I haven’t settled on an alternative, so if you have a personal favorite, I’d like to hear about it. And if I’ve just missed the workaround, I’d be pleased to hear about that, too.