Monday, 11 December 2017

How to track your bitcoin transaction from @hashflare

Hello!

I'm pretty new to the Bitcoin hype and I don't consider myself an expert in this area. Like most of you reading this blog post, you have probably invested some money in cloud mining at hashflare.io. But the recent problem of slow Bitcoin withdrawals from Hashflare, which is not really caused by Hashflare at all but by Bitcoin network congestion (see here), made me use my experience as a long-time system engineer to try to find out when my transaction will be confirmed.

I created a withdrawal request on December 6th, and to this day (December 11th) my transaction is still unconfirmed (see here).

According to Hashflare, they use block.io to broadcast the transactions (withdrawals) to the Bitcoin network, and you can follow the status of your transaction at chain.so. But if you try to find your transaction on some other sites such as blockchain.info, there is no transaction at all. The reason is that every transaction references a previous transaction as its input (the received_from txid). If that referenced transaction has not been confirmed yet either, then no transaction that references it can be broadcast to the rest of the Bitcoin network: the received_from transactions must be written to a block (confirmed) before network nodes will accept and confirm new transactions that reference them.

To track confirmations of the referenced transactions I wrote a script that shows how long the "chain of transactions" referenced by each transaction really is. It's a simple PowerShell script that calls the chain.so REST API. Just type in your txid and run the script. One small limitation of the REST API is that it can only be called 10 times per 60 seconds, so if your transaction chain is really long, a couple of thousand transactions, the script can take a couple of hours. It runs until it reaches the last confirmed transaction in the chain and outputs the referenced transaction count as well as the txids of all referenced transactions.
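As a rough estimate, at one API call every 6 seconds a chain of 2,000 referenced transactions takes about 2,000 × 6 s ≈ 200 minutes, so a bit over three hours.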



Here is the script. Just save it in a file, something like 'track transaction chain.ps1', change the $txid variable at the bottom of the script and run the file.

function AnalyzeInput($txid)
{
    # Query the chain.so REST API for this transaction
    $txdata=(Invoke-RestMethod -Uri "https://chain.so/api/v2/tx/BTC/$txid").data
    if($txdata.confirmations -eq 0)
    {
        Write-Host "Transaction ID:$($txdata.txid)"
        Write-Host "Confirmations:$($txdata.confirmations) Count=$script:counter Inputs=$(($txdata.inputs).Count)"
        foreach($input in $txdata.inputs)
        {
            # Follow the referenced (received_from) transaction of each input
            $script:counter++
            Start-Sleep 6   # stay under the API limit of 10 calls per 60 seconds
            AnalyzeInput $input.received_from.txid
        }
    }
    else
    {
        # Reached a confirmed transaction - the end of this chain
        Write-Host "Transaction ID:$($txdata.txid)" -ForegroundColor Green
        Write-Host "Confirmations:$($txdata.confirmations)" -ForegroundColor Green
    }
}

$txid="ccc958d745ddbd7cd10ba6a38454979aef2e61b0dda398e9ceff617c0631980a"
$counter=0
AnalyzeInput $txid

Happy mining! :)

Tuesday, 5 December 2017

ADFS 2016 and TLS 1.2 - The underlying connection was closed: An expected error occurred on a receive

Just recently I encountered a problem when trying to federate a domain between Azure Active Directory and an on-premises ADFS 2016 farm. When running the Azure AD Connect wizard you receive an error saying that it is not possible to convert the domain to Federated. If you try to manually run Convert-MsolDomainToFederated from the Azure AD Connect machine you receive this error message:

"The underlying connection was closed: An expected error occurred on a receive."

The issue happens if you have turned off TLS 1.0 on your ADFS servers. After you re-enable TLS 1.0, the configuration wizard runs smoothly. However, if TLS 1.0 must stay turned off because you are working in a highly secure financial institution, you have probably followed the steps in this article on how to turn on TLS 1.2 on the ADFS 2016 servers:

Managing SSL/TLS Protocols and Cipher Suites for AD FS
https://docs.microsoft.com/en-us/windows-server/identity/ad-fs/operations/manage-ssl-protocols-in-ad-fs
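For reference, the protocol settings the article manages come down to SCHANNEL registry values along these lines (a sketch only; follow the linked article for the authoritative steps and reboot the ADFS servers afterwards):

# Sketch: enable TLS 1.2 and disable TLS 1.0 for SCHANNEL on an ADFS server.
$base = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'
foreach ($role in 'Server','Client')
{
    New-Item -Path "$base\TLS 1.2\$role" -Force | Out-Null
    Set-ItemProperty -Path "$base\TLS 1.2\$role" -Name 'Enabled' -Value 1 -Type DWord
    Set-ItemProperty -Path "$base\TLS 1.2\$role" -Name 'DisabledByDefault' -Value 0 -Type DWord

    New-Item -Path "$base\TLS 1.0\$role" -Force | Out-Null
    Set-ItemProperty -Path "$base\TLS 1.0\$role" -Name 'Enabled' -Value 0 -Type DWord
    Set-ItemProperty -Path "$base\TLS 1.0\$role" -Name 'DisabledByDefault' -Value 1 -Type DWord
}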

The problem is that if you turn on TLS 1.2 on the ADFS servers, you also need to turn it on for the Azure AD Connect machine:

Enable TLS 1.2 for Azure AD Connect
https://docs.microsoft.com/en-us/azure/active-directory/connect/active-directory-aadconnect-prerequisites#enable-tls-12-for-azure-ad-connect

You can just follow the steps and edit this registry setting on the Azure AD Connect machine:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\.NETFramework\v4.0.30319 "SchUseStrongCrypto"=dword:00000001
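The same change can be scripted; a minimal sketch, assuming the usual 64-bit and 32-bit .NET Framework 4.x registry paths (reboot the Azure AD Connect machine afterwards):

# Sketch: enable strong crypto (TLS 1.2) for .NET 4.x on the Azure AD Connect machine.
$paths = 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319',
         'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319'
foreach ($path in $paths)
{
    Set-ItemProperty -Path $path -Name 'SchUseStrongCrypto' -Value 1 -Type DWord
}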

But the idea is to have a secure environment using only TLS 1.2.

A similar issue can occur between your WAP server and ADFS; here is an article on how to solve it:

Error While Configuring WAP–The Underlying Connection Was Closed
https://blogs.technet.microsoft.com/keithab/2015/04/13/error-while-configuring-wapthe-underlying-connection-was-closed/

Monday, 25 July 2016

Intune and SCCM behind forward and reverse proxies


Last week I spent many hours troubleshooting Intune with SCCM as the management point. The infrastructure in question contains an NDES server which should request certificates from an on-premises CA and push them to devices. One of the customer requirements was that the SCCM server should connect to manage.microsoft.com only through their forward proxy. The other requirement was that NDES should be published through their reverse proxy.

Everything went fine initially: the devices enrolled successfully and applications installed correctly. However, we could not get the certificates to install. At one point, certificates would get issued and would be visible in the Certificate Authority console but were never pushed to the device. At other times, we would receive this error in CRP.log on the SCCM server:

Key usage in CSR: "160" and challenge: "224" do not match for CertRequestId: ModelName=ScopeId_4599798C-283A-4BB1-9860-0968EE55B8A9/ConfigurationPolicy_b97bbe55-4fcb-4f25-8516-208eb5945ee8;Version=1;Hash=1524076804 CertificateRegistrationPoint 7/11/2016 3:24:39 PM 12 (0x000C)

I tracked the issue down to the caching reverse and forward proxies. When a request from a device reached the public NDES endpoint, the reverse proxy would forward a cached request to NDES instead of the actual one. As for the forward proxy, SCCM pushes and pulls data from manage.microsoft.com, and it seems it received stale data from the caching forward proxy; that is why we got the key usage mismatch even though we were positive the key usages were set correctly in the certificate template and the SCEP policy.

Anyway, we turned off reverse proxy caching for the NDES server and forward proxy caching for the SCCM Site Server role, and everything works perfectly now.

Wednesday, 4 November 2015

MessageRateLimit on Exchange 2013 Receive Connector


A couple of days ago a customer contacted me saying that he had the following issue when sending authenticated SMTP e-mail messages from an external host: 

4.4.2 Message submission rate for this client has exceeded the configured limit.

Some of the messages would pass through okay, but the majority of them would receive the error. The first thing he checked was the MessageRateLimit property on the Receive Connector on the frontend Exchange 2013 CAS server, but that was not where the problem was. However, he did not check the Receive Connector on the backend Exchange 2013 Mailbox servers.

This is how it was configured:

[PS] C:\Windows\system32>Get-ReceiveConnector -Server EXMBX01 | select identity,*messageratelimit,bindings | ft -AutoSize

Identity                                              MessageRateLimit Bindings
--------                                              ---------------- --------
EXMBX01\Receive Connector - Mailbox Transport Service Unlimited        {[::]:25, 0.0.0.0:25}
EXMBX01\Receive Connector - From CAS Proxy Client     5                {[::]:465, 0.0.0.0:465}

The MessageRateLimit here is set to 5, which is clearly an issue if the host needs to send more than 5 messages per minute. We configured it to 100 on each of the Exchange 2013 Mailbox servers:

[PS] C:\Windows\system32>Set-ReceiveConnector -Identity "EXMBX01\Receive Connector - From CAS Proxy Client" -MessageRateLimit 100 

[PS] C:\Windows\system32>Get-ReceiveConnector -Server EXMBX01 | select identity,*messageratelimit,bindings | ft -AutoSize

Identity                                              MessageRateLimit Bindings
--------                                              ---------------- --------
EXMBX01\Receive Connector - Mailbox Transport Service Unlimited        {[::]:25, 0.0.0.0:25}
EXMBX01\Receive Connector - From CAS Proxy Client     100              {[::]:465, 0.0.0.0:465}


And the problem went away.

When mail flows in from a host on the Internet, it reaches the SMTP Receive Connector on the CAS 2013 server on port 587, which is used only for authenticated SMTP traffic. The CAS 2013 role then proxies the request to the Receive Connector on the Exchange 2013 Mailbox server on port 465 to deliver the mail. That is why the MessageRateLimit actually needs to be configured on the backend Mailbox server and not on the frontend CAS server.
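If there are several Mailbox servers, the same change can be applied in one pass; a minimal sketch, assuming every backend server uses the same connector naming as shown above:

[PS] C:\Windows\system32>Get-ReceiveConnector | Where-Object {$_.Name -like "*From CAS Proxy Client*"} | Set-ReceiveConnector -MessageRateLimit 100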