Well I got confirmation that I've bought one but no email as yet. So the wait goes on!
Apparently if you log back into your account and go to manage railcard you should see a code. I don't, I just get error 500
There's something not quite right when 10,000 of these railcards have been available to Greater Anglia residents for weeks, and then only another 10,000 are released for the rest of the country to fight over.
This is what puzzles me as well: presumably they had enough data from this successful trial to extrapolate whether it would be successful nationally? Otherwise, what was the point of the trial in the first place?
A paper version will have its place, but paper is not going to be the way forward, as evidenced in plenty of other industries.
Paperless has a long string of benefits for both the industry and the customers (but also pitfalls of course). Taking TfL's lead, I can imagine differential pricing between paperless and ticketed versions of the same product in the near future, in an attempt to move people onto paperless ticketing.
Most people would prefer not having to faff around with a ticket which cannot be replaced if lost, and in the overwhelming majority of cases there is absolutely no issue. Of course some technical details will need ironing out, and there will be problems only surfacing when the new system is around and in use, but that is the same with anything new.
I wouldn't take too much notice of the general mood on this forum as a barometer for the mood of the general public, but this is a good place to gather information on potential issues that may be encountered.
On the Railcard trial, 10,000 does seem a bit of a silly number though, especially when made available to the whole population.
Surely the demand for these railcards, combined with the need for it in the first place (and the provision of other discount cards on the railway) prove that the fare structure, as it stands, is too expensive and suppressing demand?
Speaking of which, have any of the other Anglian trialists been receiving and filling in the monthly e-mailed surveys about their use of the railcard? The survey is very clearly trying to gauge exactly that: the suppressed demand that the railcard might be unlocking. Indeed, for each of the three surveys I've filled in so far I've reported that I've made more off-peak leisure journeys by rail as a result of having the discount.
“Real Virgin” (West Coast) are allowing all customers aged 26-30 to purchase and use 26-30 Railcard discounted tickets if they present an avocado next week (March 13th-20th).
https://www.virgintrains.co.uk/avoc..._Media&utm_campaign=Avocard&utm_content=Mar18
I hope ATOC can find a way to cancel the cards of people who used bots or otherwise circumvented manually waiting in the queue to obtain one.
People are blaming the computer crash on ATOC etc. For once I disagree. It seems to me to be in the nature of computers that they are susceptible to technical glitches, regardless of who sets them up. Low tech is far better for this customer, at any rate.
But it's true as you say, that we will get a choice - of what they want to sell us.
Finally managed to get one at around 5pm, just through hammering the refresh button on my phone while travelling to watch football! Feel sorry for those who had tried throughout the day and didn't manage to get through to the site, though.
To be fair, there wasn't a queue at all! It just seemed pot luck whether you got through or not. Obviously their web server just couldn't handle the load, so it came down to luck: you had to attempt to load the site at the exact moment someone else had left, freeing up a little capacity to deal with your request.
And that is partly why it was so poor. Even if you can't be bothered to make sure your site has the capacity to cope, there are services you can use that sit in front of your site and act as an actual queue.
Nope, sorry, I can't agree with that. Without saying too much, part of my job is dealing with websites and making sure they stay up, especially under high load.
There are so many ways RDG / ATOC could have avoided yesterday's chaos. As I said above, even just putting a queuing service in front of the site would have been better than doing nothing. Of course they could also have added more servers. I don't know the actual infrastructure they run on, but even physical servers can be set up behind a load balancer in a matter of days (and if they are running things in the cloud, it would have taken less than hours!). This wasn't unexpected high traffic. This was something that was incredibly predictable and, to be blunt, damn obvious to anyone who was paying attention.
As someone in the industry: to mess up so badly when you knew the traffic was coming (and that the traffic was your own fault, thanks to such a stupid roll-out) - there is zero defence. Now granted, yes, you get glitches, or mistakes by people (again, I can't say too much, but you get things like people forgetting to pay server bills), or things you didn't know about or couldn't predict (massive amounts of traffic with no notice beforehand will kill nearly any site). But when you know it is going to happen? Nah, RDG / ATOC and the tech team behind the site deserve every bit of criticism they get for it.
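The "queuing service in front of the site" idea mentioned above is simple enough to sketch. The snippet below is purely illustrative (real waiting-room services do far more, and nothing here reflects how the railcard site actually works): at most a fixed number of visitors are admitted at once, and everyone else gets an ordered place in line instead of an error 500.

```python
import threading
from collections import deque

class AdmissionGate:
    """Minimal sketch of a 'waiting room' in front of a site.

    At most `capacity` visitors are admitted at once; the rest are
    given an ordered queue position rather than being turned away.
    Illustrative only -- not a production-grade queuing service.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.active = 0            # visitors currently on the site
        self.waiting = deque()     # visitors queued, in arrival order
        self.lock = threading.Lock()

    def arrive(self, visitor_id):
        """Admit the visitor if there is capacity, else queue them."""
        with self.lock:
            if self.active < self.capacity:
                self.active += 1
                return ("admitted", None)
            self.waiting.append(visitor_id)
            return ("queued", len(self.waiting))  # position in line

    def leave(self):
        """A visitor finishes; admit the next queued visitor, if any."""
        with self.lock:
            if self.waiting:
                # The leaver's slot passes straight to the head of the queue.
                return ("admitted", self.waiting.popleft())
            self.active -= 1
            return (None, None)
```

The key property is that overload degrades into an orderly, first-come-first-served wait rather than random refused connections, which is exactly what people hammering the refresh button were missing.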
I agree that their handling of the demand was exceedingly poor. However, in their defence, they were not perhaps expecting the media to pick up on the story as much as ended up happening. The articles were at the top of most news channels for half a day or so - not exactly as low-key as they might have hoped/expected.
Agree; perhaps they genuinely didn't expect such a high demand? The launch to sale of the Anglia trial was fairly 'soft' and probably gave them some (in hindsight) false comfort.
Rubbish. It's simply that enough capacity wasn't provided. Your statement is like trying to fit 36tph in both directions on a single-track line and then saying "it seems that it's in the nature of railways that there are always delays, no matter who designed them". Yes, there are always going to be small glitches with computer software, since it's such a complex thing that it's almost impossible to get everything perfect (just as there are plenty of glitches when humans try to deal with rail fares, since they're so complicated...), but using that as an excuse for what is blatantly an issue of capacity is just rubbish. Computers are not magic.
There's always somebody who's done some scaling work somewhere who is willing to pronounce judgement without any understanding of the system they are talking about. It's a real problem in the tech industry that everybody seems to think everybody else is an idiot.
Whether they could just spin up more servers, inside the cloud or out of it, is extremely architecture-dependent. It sounds like the basic website mostly stayed up just fine, so the basic mitigations you are suggesting were probably either already in place or would not have been that useful. The actual transaction engine was the thing on a go-slow, and that's likely to be the hardest element to scale. If the site had been built to handle 12x normal volumes in the first place, it would probably have been judged too expensive and a cheaper version insisted on, and scaling it up could comfortably have taken a month or two.
I would guess that instead of a month or two, the tech team were given little more notice of this than the rest of us, with a back-of-the-envelope calculation that said something like "load might be double". They probably didn't see the media plan and didn't necessarily know that anybody was going to publicise the 10k limit and create a rush in the morning.
Added to this, if the transaction engine was scaled, it will have been done by sharing a minimal amount of state. That would normally work quite well, because the various cards haven't got much to do with each other, but suddenly there was a new requirement: the 10k quota. Checking and updating a single counter isn't, on the face of it, a difficult thing to do, but if the requirement is presented the week before and you don't have a lot of resource, then there is essentially zero chance of doing any load testing on the solution. If a keen junior had chosen the wrong way to do it (a row count on a poorly indexed table, for example), then that alone could make the whole site fall apart.
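To illustrate that counter point: the real system's stack and schema are unknown, but here's a hedged sketch (using SQLite as a stand-in database) contrasting the slow "row count per purchase" approach with an atomic decrement of a single quota row.

```python
import sqlite3

# Illustrative only -- invented schema, not the railcard system's.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE railcards (id INTEGER PRIMARY KEY, holder TEXT)")
conn.execute("CREATE TABLE quota (remaining INTEGER)")
conn.execute("INSERT INTO quota VALUES (10000)")

def issue_slow(holder):
    """Row count on the sales table: a scan on every purchase,
    and racy unless every transaction is fully serialised."""
    (sold,) = conn.execute("SELECT COUNT(*) FROM railcards").fetchone()
    if sold >= 10000:
        return False
    conn.execute("INSERT INTO railcards (holder) VALUES (?)", (holder,))
    return True

def issue_fast(holder):
    """Atomic decrement of a single counter row: one small update,
    no scan, and the WHERE clause itself enforces the cap."""
    cur = conn.execute(
        "UPDATE quota SET remaining = remaining - 1 WHERE remaining > 0")
    if cur.rowcount == 0:
        return False  # quota exhausted
    conn.execute("INSERT INTO railcards (holder) VALUES (?)", (holder,))
    return True
```

Both look fine at 5,000 sales a year; only under a one-morning rush does the difference between a per-purchase table scan and a single-row update start to matter, which is exactly why skipping load testing is so dangerous.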
Finally, it would hardly be the first time in history when a marketing team have been delighted to see a transaction system go down. The fact of the matter is that most people weren't on the site on the day trying it anyway, so the number of people affected was somewhat limited. The press meanwhile will largely take it as a sign of product popularity rather than incompetence, so you get some great coverage out of it.
Hell, even music ticketing sites have sorted their game out now.
I think they caused the problem by putting a numbers cap on it. If they had instead said you could purchase one for two days, some people would have avoided the rush by purchasing overnight etc.
FWIW this issue occurs in many places where you'd think it wouldn't - See Tickets is notably bad, while Ticketmaster does have a proper queueing system.
Some very much have not. The presence of a very effective queueing and ticket holding system is why, if it's an option, I always use Ticketmaster.
The reason I actually mentioned that is because at least recently, See Tickets have been really good for me. Even gigs that had a large demand, their queuing system worked flawlessly. I know in the past they were awful for it, but I've not had issues with them in a long time now.
But agreed on them making a rod for their own back with the rollout plan.