How to cheat at Pokemon Go!

One of the things I find myself getting asked reasonably often is whether it’s possible to cheat at Pokemon Go!

The answer is: yes, of course it is… It does require installing a hacked version of the app, which (depending on your phone’s operating system) may require installing third-party enterprise certificates and trusting the application to run. This guide shows screenshots for iOS, but one imagines that Android works in a similar way:

  1. On your phone, using the Safari browser, navigate to http://www.guopan.cn/ and search for Pokemon Go!
  2. Tap the green install button and accept the security warnings, allowing the installation of the application and the store certificate
  3. After the application has installed, open Settings and browse to General > Profiles & Device Management
  4. Tap the installed profile for the store and choose to trust the certificate; the cracked application will not start until this trust is in place
  5. After this, you can launch the application (it will install what appears to be a second copy of Pokemon Go!, so keep track of which copy is which)
  6. After launching, enter your birthday and sign in with your normal Pokemon Go! account details and configure your notification and AR settings as normal
  7. While the game is starting, tap the rocket icon in the top-left corner to open the location options
  8. The first five options are various locations in China (I’m afraid I don’t read Chinese, so I don’t know exactly where they are). Option 6 will cause the application to use your current real-world location as provided by your GPS sensors, while the final option will allow you to enter coordinates for any location you like. These can be obtained by dropping a pin on Google Maps and copying the coordinates (a pin dropped on Santa Monica Pier, for example, yields coordinates of roughly 34.0083, -118.4986).
  9. After choosing your option, tap the blue button to enter the game.

You can move your character around using the directional buttons in the top left. Tapping the little man symbol will allow you to change your movement speed:

  1. Man – average walking speed
  2. Bike – average cycling speed
  3. Car – average driving speed
  4. Plane – average aeroplane speed

Be careful about moving too quickly, as travelling too fast will cause the game not to count the distance moved towards your egg hatching and your buddy’s candy. The buttons added by the cracked app can be moved by tapping and holding, then dragging. The rest of the game plays exactly the same as the normal game: gyms can be taken or reinforced, Pokestops can be swiped, and Pokemon can be captured and evolved and eggs hatched exactly as normal. The only part of the hack that I haven’t seen working is the in-game store, so you’ll need to use your real version for any in-game purchases.

Again, be careful when you’re using the rocket options: Niantic may not know when you’re using this cracked version of the game, but they will know if you’re moving too quickly. It’s not possible to travel from London to Adelaide in 10 minutes, so if you do that in the hacked version, you’re likely to get your account banned. Spoof your location sensibly, people!

Finally, when Niantic release an update to the genuine game, you may find that the cracked version stops working and, when launched, prompts for an update. This happens because Niantic will have made running the most current version mandatory. You’ll need to wait until a new cracked version appears on the store, then re-install it. Similarly, you may need to delete and re-install the crack if the store certificate chain changes, as this will invalidate the trust.

Gotta catch ’em all!


Escape to the Movies: Also Showing

I’m holding off on writing my review of Star Wars: The Last Jedi for the moment. Partially to let everyone see it and avoid giving away any spoilers, but mostly because I’m lazy. Short version, though: it’s brilliant and you should go see it.

In the meantime, I thought I’d share these trailers for upcoming films that look quite interesting…

First up, Mortal Engines:


London must feed…

Next, The Greatest Showman:


No one ever made a difference by being like everyone else.

For the comic book fans, Black Panther looks pretty epic:

And, of course, you can’t beat some dinosaur goodness. Jurassic World: Fallen Kingdom has that covered:

And finally, if you like your Manga adaptations, Alita: Battle Angel is the one for you:


May you stay in the arms of the angels…


DPM2016 CU4 Fix

I’ve noted in other recent posts on DPM that we have been experiencing issues with the latest release of Data Protection Manager 2016, CU4. The main issue is that when the overnight maintenance jobs run to remove expired replicas from storage, the job causes a service crash when trying to remove some disk/volume backups; SQL and Exchange data don’t trigger the issue. I raised a Premier support call with Microsoft about this, and they advised that it is a bug in 2016 CU4 which can be worked around by running the script they supplied against your DPMDB database on the SQL server. This alters the existing prc_RM_ReplicaDataset_GetValidDatasetCountOnPhysicalReplica stored procedure in the database, replacing it with a new version that accepts an IncludeDatasetsWithoutSC parameter (with a default value of 1) in addition to the DatasourceId and ReplicaId GUIDs.
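
If you want to check whether the patched procedure is already in place, one quick way is to list its parameters via the standard sys.parameters catalog view and look for IncludeDatasetsWithoutSC (a sketch; adjust the schema name if yours differs):

-- List the parameters of the affected stored procedure; the patched version
-- should show IncludeDatasetsWithoutSC alongside DatasourceId and ReplicaId
SELECT p.name AS parameter_name, TYPE_NAME(p.user_type_id) AS type_name
FROM sys.parameters AS p
WHERE p.object_id = OBJECT_ID('dbo.prc_RM_ReplicaDataset_GetValidDatasetCountOnPhysicalReplica');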

I expect that Microsoft will roll this fix up into a hotfix for DPM, and it will certainly be resolved in the next update release for the product. As ever, I wouldn’t recommend running this or otherwise altering the database if you’re not having this specific problem. You’ll know if you are: your DPM instance will crash overnight during its tidy-up processes; any manual call to the inbuilt PruneShadowCopiesDPM.ps1 script will also cause a crash when it tries to delete an expired disk/volume backup; your server will retain recovery points beyond their retention period for those data sources that haven’t been successfully pruned; and when a crash occurs, you will see the following in the application log on the DPM server itself:

Procedure or function prc_RM_ReplicaDataset_GetValidDatasetCountOnPhysicalReplica has too many arguments specified.

Hope this helps anyone out there having these problems: as ever, please let me know any thoughts, comments or suggestions.


Pruning DPM Replicas

There are times when, as a DPM administrator, you might want to prune disk-based replicas of data sources, whether to save space or for another reason entirely. We’re currently experiencing an issue with DPM 2016 CU4 whereby replicas of filesystem data (disks/volumes/directories/shares) aren’t being pruned: the maintenance task calls a SQL procedure with too many arguments when removing these replicas, resulting in a crash. I wrote this script to help delete replicas of other types of data that weren’t being cleared up because of these crashes:

Clear-Host
Import-Module DataProtectionManager

# Ask which DPM server to connect to and for the cutoff date; recovery
# points older than the cutoff will be removed
$DPMServerName = Read-Host "Enter DPM Server Name"
$Dte = [datetime](Read-Host "Enter cutoff date (mm/dd/yyyy)")
$pg = Get-ProtectionGroup -DPMServerName $DPMServerName

# List the protection groups on the server and ask the user to pick one
Write-Host "Available Protection Groups:" -ForegroundColor Magenta
for ($i = 0; $i -le $pg.Length - 1; $i++)
{
    Write-Host "$i - $($pg[$i].FriendlyName)"
}

$ProtectionGroup = Read-Host "Enter the number of the group to process"
$ds = Get-Datasource -ProtectionGroup $pg[$ProtectionGroup]

Clear-Host
Write-Host "Protection Group Data Sources:" -ForegroundColor Magenta

# List the data sources in the chosen group; "A" processes all of them
for ($i = 0; $i -le $ds.Length - 1; $i++)
{
    Write-Host "$i - $($ds[$i].Name) on $($ds[$i].Computer)"
}
Write-Host "A - Process all datasources"

$Datasource = Read-Host "Enter the Datasource ID"
if ($Datasource -ne "A")
{
    $recoverypoints = Get-RecoveryPoint -Datasource $ds[$Datasource]
}
else
{
    # Gather the recovery points for every data source in the group
    $recoverypoints = @()
    foreach ($source in $ds)
    {
        $recoverypoints += Get-RecoveryPoint -Datasource $source
    }
}

if ($Datasource -ne "A")
{
    Write-Host "Most recent recovery point: $($recoverypoints[$recoverypoints.Count - 1].RepresentedPointInTime)"
}
Clear-Host

# Remove each recovery point older than the cutoff, skipping the rest
foreach ($rp in $recoverypoints)
{
    $rpsize = [math]::Truncate($rp.Size / 1GB)
    $line = "$($rp.Datasource) - $($rp.RepresentedPointInTime) ($($rpsize)GB)"
    if ($rp.RepresentedPointInTime -lt $Dte)
    {
        Write-Host "Removing $($line)..."
        Remove-RecoveryPoint $rp -Confirm:$false -ErrorAction SilentlyContinue
    }
    else
    {
        Write-Host "$($line) later than cutoff, skipping..."
    }
}

Simply save it to a .ps1 file on your DPM server and launch it from the DPM Management Shell. When run, it will ask for the name of the DPM server to connect to and a cut-off date; recovery points older than the cut-off will be removed. Next, the script will show all your protection groups and ask you to choose one. After this, it will show you all the data sources in that protection group, giving you the option of processing just one item or all items in the group.

Hope this script is helpful. Any comments or suggestions, please feel free to let me know!


Thought for the day

Shamelessly stolen from the Core Rulebook of the Elite: Dangerous RPG:

… when you’re out there in your ships, a canopy of stars above you, and not a living thing for a million miles all around, the light of a purple dwarf star glinting off the hull plates, and an unexplored galaxy stretching out in front of you … then you’ll know what it means to be alive.


Off-boarding Exchange Online users who have never had an on-prem mailbox

Wow. Sorry for the long title on this one folks, but allow me to continue.

As regular readers of my occasional tech-tips posts will know, our organisation exists in a hybrid relationship with Exchange Online. Some users are in the cloud, and some are on-premises as we migrate them. Occasionally a user will need to be off-boarded from the cloud back to Exchange on-premises. This is easy enough if the user was migrated from Exchange on-premises in the first place, but what about the scenario where the user’s mailbox was created directly in the cloud? There’s no on-prem mailbox to go back to, so you might think it would be a painstaking process of PST exports, removal of licenses, mail-enabling the user on-prem and then PST imports… Well, you can do it that way, or you can try the following:

  1. Log into Exchange Online PowerShell and run the following command to get the mailbox GUID:
    Get-Mailbox user@domain.com | Select ExchangeGuid
  2. Remove all the dashes from the GUID (steps 1 and 2 can be combined; see the one-liner after this list)
  3. Generate a new value for the user’s legacyExchangeDN attribute. You can copy the value from an existing on-premises user; just change the CN= value at the end to the user’s sAMAccountName or name value, e.g.:
    /o=First Organization/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=User1
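
Incidentally, steps 1 and 2 can be combined into a single line of Exchange Online PowerShell. A minimal sketch (substitute your own mailbox address):

# Fetch the mailbox GUID and strip the dashes in one go
$guid = (Get-Mailbox user@domain.com).ExchangeGuid.ToString() -replace '-', ''
$guid # e.g. d2cd81e6be4a46b48bc6655cfaad86d9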

Once you have these values, you can run the following two PowerShell commands after importing the Active Directory module to set the values:

Set-ADUser user1 -Replace @{msExchMailboxGuid=[GUID]'d2cd81e6be4a46b48bc6655cfaad86d9'}

Set-ADUser user1 -Replace @{legacyExchangeDN="/o=First Organization/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=User1"}

Obviously, you’ll need to substitute the example msExchMailboxGuid and legacyExchangeDN values with the real values you generate for your own users. Once the necessary syncs have taken place and the object is up to date, you will be able to create an off-boarding migration batch to move the cloud mailbox back on-premises.


Large DPM databases

Had an ongoing issue recently where one of our DPM servers seemed to crash overnight. Eventually I found an error in the log stating that the DPM server had lost connectivity to the DPMDB database in the SQL instance. I thought this was a little odd, because we have three DPM servers which all connect to instances on the same SQL server, and the other two servers had been fine.

A cursory check of the SQL server revealed nothing much, so I started looking at what DPM might have been doing. DPM has a number of housekeeping tasks that it runs to keep its database tables in order, and these run between midnight and 3am depending on the server load and how much each task has to do. It was between these times that we were seeing the crashes, so I started to check the jobs and the tables they look at.

It was quickly apparent that the tbl_TE_TaskTrail table was enormous, occupying nearly 16GB of the database. This seemed strange, as one of those overnight processes is meant to clear out any records from this table with a StoppedDateTime value older than 33 days, essentially meaning the table should only hold a little over a month’s worth of data. A quick check of the table by running the SQL statement below showed we had a problem with this task:

SELECT TOP 100 * FROM dbo.tbl_TE_TaskTrail ORDER BY CreatedDateTime;


The results showed records of jobs going back to 2015, which shouldn’t have been there if the clear-up jobs had been running successfully… So, after discovering this, I ran the following to find out how many rows were older than the prescribed 33 days:

SELECT COUNT(*) FROM dbo.tbl_TE_TaskTrail WHERE StoppedDateTime < (GETDATE() - 33);

It returned a count of nearly 5 million rows… Hurp. No wonder the query was timing out!
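
As an aside, if you want to confirm how much space the table is occupying on your own server, the standard sp_spaceused system procedure will tell you:

-- Report the space used (and reserved) by the task trail table
EXEC sp_spaceused 'dbo.tbl_TE_TaskTrail';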

So, how do you clear this table down and let DPM carry on, without impacting performance during the day or relying on the scheduled maintenance tasks, which demonstrably hadn’t been running correctly? Well, you can run the following SQL block against your DPMDB, which will process the table in blocks of 50,000 rows at a time (re-run it until no out-of-date rows remain). It deletes the rows associated with the old tasks from the related tables first, before finally clearing the records from tbl_TE_TaskTrail itself. Some of the tables might not have associated records in them, so don’t worry if some of the DELETE statements report that they haven’t removed any rows; this is normal.

USE DPMDB_MY_DPM_DB -- Change this to the name of your DPM database
GO

DECLARE @GCTill DATETIME = GETUTCDATE() - 33  -- anything that stopped more than 33 days ago
DECLARE @TempTable TABLE (TID UNIQUEIDENTIFIER)

-- Collect one batch of 50,000 old task IDs, skipping tasks still referenced by alerts
INSERT INTO @TempTable (TID)
SELECT TOP 50000 TaskID
FROM dbo.tbl_TE_TaskTrail
WHERE StoppedDateTime < @GCTill
  AND dbo.tbl_TE_TaskTrail.TaskID NOT IN (SELECT taskID FROM tbl_AM_AgentTask_Alerts)
  AND dbo.tbl_TE_TaskTrail.TaskId NOT IN (SELECT taskID FROM tbl_MM_MediaRequiredAlert)

-- Remove dependent rows from the related tables first
DELETE FROM dbo.tbl_RM_RecoveryTrail_RecoverableObjects WHERE TaskID IN (SELECT TID FROM @TempTable)
DELETE FROM dbo.tbl_AM_AgentDeploymentTrail WHERE TaskID IN (SELECT TID FROM @TempTable)
DELETE FROM dbo.tbl_ARM_TaskTrail WHERE TaskID IN (SELECT TID FROM @TempTable)
DELETE FROM dbo.tbl_CM_InquiryResult WHERE TaskID IN (SELECT TID FROM @TempTable)
DELETE FROM dbo.tbl_MM_MediaRequiredAlert WHERE TaskID IN (SELECT TID FROM @TempTable)
DELETE FROM dbo.tbl_MM_Task WHERE TaskID IN (SELECT TID FROM @TempTable)
DELETE FROM dbo.tbl_MM_TaskTrail WHERE TaskID IN (SELECT TID FROM @TempTable)
DELETE FROM dbo.tbl_PRM_CloudRecoveryPointTrail WHERE TaskID IN (SELECT TID FROM @TempTable)
DELETE FROM dbo.tbl_PRM_ReferencedTaskTrail WHERE TaskID IN (SELECT TID FROM @TempTable)
DELETE FROM dbo.tbl_RM_CandidateDatasetsForSCAssociation WHERE TaskID IN (SELECT TID FROM @TempTable)
DELETE FROM dbo.tbl_RM_RecoveryTrail WHERE TaskID IN (SELECT TID FROM @TempTable)
DELETE FROM dbo.tbl_RM_ReplicaTrail WHERE TaskID IN (SELECT TID FROM @TempTable)
DELETE FROM dbo.tbl_RM_ShadowCopyTrail WHERE TaskID IN (SELECT TID FROM @TempTable)
DELETE FROM dbo.tbl_TE_TaskError WHERE TaskID IN (SELECT TID FROM @TempTable)
DELETE FROM dbo.tbl_TE_TaskTrail WHERE TaskID IN (SELECT TID FROM @TempTable)

So, how long does this take to run? On my SQL server (which also hosts the databases for two other DPM instances) I found I was processing around 1,000 records a minute, so a 50,000-record block took about five minutes. If your server is beefier, you can increase the TOP value in the INSERT statement to process more records at a time.
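
If you’d rather not re-run the block by hand until it’s finished, you can drive it in a loop from PowerShell instead. The sketch below makes a few assumptions: the SQL block above has been saved to a file, the SqlServer module’s Invoke-Sqlcmd cmdlet is available, and the instance name and file path are placeholders you’ll need to change:

Import-Module SqlServer

$inst = 'SQLSERVER\DPMINSTANCE'   # placeholder SQL instance name
$db = 'DPMDB_MY_DPM_DB'           # your DPM database name

do {
    # Run one 50,000-row batch (the SQL block above, saved to a file)
    Invoke-Sqlcmd -ServerInstance $inst -Database $db -InputFile 'C:\Scripts\Clean-TaskTrail.sql'

    # Check how many out-of-date rows remain
    $left = (Invoke-Sqlcmd -ServerInstance $inst -Database $db -Query `
        'SELECT COUNT(*) AS C FROM dbo.tbl_TE_TaskTrail WHERE StoppedDateTime < (GETDATE() - 33);').C
    Write-Host "$left old rows remaining"
} while ($left -gt 0)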

Hope this helps everyone out there.


Exchange 2013 > Exchange 2007 mail relay issues

Had an interesting issue come to light in our Exchange environment recently. Before I go into it, I’ll give a bit of background on how we’re set up…

Most of our organisation has mailboxes in an Exchange 2007 environment. There are some users and shared mailboxes in Exchange 2013, and this environment is in a hybrid relationship with Exchange Online while we’re migrating to Office 365.

Mail from Exchange 2007 to Exchange 2013 and back is handled by internal transport services. Mail to and from Exchange Online is handled by a set of scoped connectors on the Exchange 2013 side of the organisation. All mail heading into or out of the organisation is relayed through a mail gateway which communicates with the transport services on Exchange 2013.

What started out as an investigation into why our text messaging system was not dealing with emails that should have been sent out as text messages led to the discovery that hundreds of emails had queued on the Exchange 2013 side of the organisation and were not being relayed to Exchange 2007.

The usual tricks of using PowerShell to force-resume the message queues or restarting the transport services did nothing; however, we did notice some TLS errors in the event viewer stating that the Exchange 2013 servers could not create a TLS connection to the Exchange 2007 servers. On further investigation, we determined that this was due to a malformed exchange of trusted certificate authorities between the servers. Essentially, because the list of trusted certificate authorities being exchanged was too long, it was being truncated by the protocol (this behaviour is by design, apparently), and the servers weren’t able to establish an authority to use for mutual trust.

The exact text of the error you’ll see in the log is:

451 4.4.0 Primary target IP address responded with: “421 4.4.2 Connection dropped due to SocketError.” Attempted to failover to alternate host, but that did not succeed. Either there are no alternate hosts, or delivery failed to all alternate hosts.

There are two main ways to fix this error if you encounter it:

  1. Delete some certificates from the computer account’s trusted root CA store. Be careful to remove only CA certificates that have expired or are otherwise invalid, as deleting the wrong certificate can cause serious issues with secure communications.
  2. Create a DWORD value named SendTrustedIssuerList, with a value of 0, under the following key on the Exchange 2007 transport servers: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL. Once done, restart the Exchange Transport services (see the sketch below).
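
If you go with option 2, here’s a minimal PowerShell sketch of the registry change (run on each Exchange 2007 transport server; the path, value name and data are exactly as described above):

# Tell SChannel not to send the trusted issuer list during the TLS handshake
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL' `
    -Name 'SendTrustedIssuerList' -PropertyType DWord -Value 0 -Force

# Restart the transport service to pick up the change
Restart-Service MSExchangeTransport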

Both options have advantages: option 1 continues to allow mutual TLS between the servers, but carries the risk that you might remove certificates that are needed for other services; option 2 carries the least risk, though not exchanging a root CA list has implications for mutual TLS. This Microsoft article describes the issues… Quite why this issue chose to raise its head now, when our Exchange co-existence environment has been configured for several years, is anyone’s guess…


Exchange Online migrations and calendar permissions

We’re in the middle of migrating our on-premises Exchange organisation to Exchange Online. You see some odd things when you’re running a hybrid Exchange environment, but one of the oddest I’ve seen so far is a user being denied permission to edit or remove items in their own calendar after being migrated. Trying in Outlook returned an error saying they didn’t have permission to send as the specified user (themselves…) and OWA looked like it had deleted the appointment until the view refreshed, after which it came back.

Turns out the problem is due to the LegacyExchangeDN attribute value on the user object changing when they migrate to Exchange Online. The original value of this attribute is meant to be added as an additional entry on the proxyAddresses attribute when migration completes, but it seems this doesn’t always happen, and this error is the result.

To fix it, you need to re-create the value of the LegacyExchangeDN attribute as it would have been prior to migration, and add it to proxyAddresses manually.

To do this, you’ll need to find another user to whom the affected user sent an email prior to being migrated. If that user then replies to the email, they will receive an NDR saying that the message to “IMCEAEX-_O=…” failed. This IMCEAEX address can be converted into the missing X500 address by doing the following:

  • Delete the “IMCEAEX-” string from the start
  • Delete the “@domain.com” string from the end
  • Add “X500:” at the start

Then replace the escaped special characters using find and replace:

  Old value          New value
  _ (underscore)     /
  +20                (space)
  +26                &
  +28                (
  +29                )
  +40                @
  +2E                .
  +2C                ,
  +5F                _ (underscore)

Note that the order matters here: replace the plain underscores with slashes before converting the +5F sequences, otherwise the underscores you have just created will themselves be turned into slashes.
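
If you have more than a couple of these to fix, the conversion is easy enough to script. Here’s a minimal sketch (the function name is my own invention; note that it applies the underscore-to-slash replacement before converting the +5F sequences, for the reason given above):

# Hypothetical helper: convert an IMCEAEX NDR address into the missing X500 address
function Convert-ImceaexToX500
{
    param([string]$Imceaex)

    # Strip the IMCEAEX- prefix and the @domain.com suffix
    $s = $Imceaex -replace '^IMCEAEX-', '' -replace '@[^@]+$', ''

    # Underscores are escaped slashes; convert them before handling +5F
    $s = $s -replace '_', '/'

    # Convert the escaped special characters back
    $s = $s -replace '\+20', ' ' -replace '\+26', '&' -replace '\+28', '(' `
            -replace '\+29', ')' -replace '\+40', '@' -replace '\+2E', '.' `
            -replace '\+2C', ',' -replace '\+5F', '_'

    "X500:$s"
}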

Once you have the fully reconstructed X500 address, add it to the proxyAddresses attribute of the affected user using ADSIEdit and wait for DirSync / Azure AD Connect to update the cloud object for the user. After this, the issues should resolve themselves.


DPM, Previous Versions and Terminal Services

We use Data Protection Manager to allow end-users to perform self-service recovery on their own home drives by using Previous Versions. This integrates with Active Directory to present the replica shares directly from the DPM server which performs the backup.

It seems that the Previous Versions tab is disabled by default on remote desktop (RDP) servers, which means that users on such servers (most of ours, as we operate an extensive remote desktop environment) didn’t have this functionality.

In order to re-enable this functionality, you’ll need to create a DWORD registry value called “EnableDLSPreviousVersionsOnTS” in the following location:

HKLM\Software\Microsoft\Windows\CurrentVersion\Explorer
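
A minimal sketch of the change in PowerShell, assuming (as the name suggests) that a value of 1 means “enabled”:

# Enable the Previous Versions tab for users on RDP servers
New-ItemProperty -Path 'HKLM:\Software\Microsoft\Windows\CurrentVersion\Explorer' `
    -Name 'EnableDLSPreviousVersionsOnTS' -PropertyType DWord -Value 1 -Force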

Once done, reboot the server at a suitable time and the functionality will be re-enabled for users on remote desktop servers.
