Channel: msexchangequery

Configure DKIM and DMARC in an on-premise Exchange environment


A brief history of DKIM:

Cisco’s Identified Internet Mail (IIM) and Yahoo’s DomainKeys were merged in 2004 to form DomainKeys Identified Mail (DKIM), an IETF standard now described in RFC 6376.

IIM and DomainKeys are both deprecated and no longer maintained as standards; the merged DKIM specification is what is widely used today.

By using SPF we let everyone know which IPs are authorized to send email for our domain.
However, SPF alone is not considered sufficient: an authorized server on the SPF list can be compromised and used to send spoofed messages.
DKIM is a process through which the recipient domain can validate that a message really originated from the sending domain and was not spoofed.

How does DKIM work?

DKIM involves two processes: signing and verifying. Signing is performed by the sender, typically by a module in a Mail Transfer Agent (MTA).
Exchange Server has no built-in option to DKIM-sign email.
We need an MTA agent on the Exchange server to perform this job, or, better for an on-premise setup, we can enable signing of all outbound email on an SMTP gateway.
Almost every SMTP gateway on the market has an option to enable DKIM and DMARC.
When this feature is enabled for outgoing email, the sending organization inserts a DKIM-Signature header containing a hash of selected header fields and of the message body.

Verification is done by the receiving domain, if DKIM verification is configured there. If no DKIM verifier is configured on the receiver, no verification is performed and the mail is routed to the recipient normally.
The receiving SMTP server uses the domain name and the selector from the signature to perform a DNS lookup for the public key.

We can rotate the keys from the SMTP gateway (or whichever application performs the signing) whenever we suspect the private key has been compromised.
In that case we change the selector name in DNS so that the new selector points at the record containing the new public key.

This scenario is very rare, but if anyone does obtain a copy of your private key, they will be able to sign messages on your behalf.

The private key stays with the domain owner, on the MTA agent that performs the signing, while the public key is published as a DNS TXT record.
This published TXT record allows anyone to verify that the signature in a received email is valid and that the contents of the email have not been tampered with.
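To make the hashing side of this concrete, below is a minimal Python sketch of the body-hash (bh=) portion of a DKIM signature, using the "simple" body canonicalization and SHA-256. This is only the body-hash step; real signing additionally signs a hash over the selected headers with the RSA private key on the gateway.

```python
import base64
import hashlib

def simple_body_canonicalize(body: str) -> bytes:
    """Apply DKIM 'simple' body canonicalization: normalize line
    endings to CRLF and reduce trailing empty lines to a single
    CRLF (an empty body canonicalizes to one CRLF)."""
    lines = body.replace("\r\n", "\n").split("\n")
    while lines and lines[-1] == "":   # drop trailing empty lines
        lines.pop()
    return ("\r\n".join(lines) + "\r\n").encode("utf-8")

def dkim_body_hash(body: str) -> str:
    """Compute the base64 SHA-256 body hash that appears in the
    bh= tag of a DKIM-Signature header."""
    digest = hashlib.sha256(simple_body_canonicalize(body)).digest()
    return base64.b64encode(digest).decode("ascii")
```

Any change to the message body changes this hash, which is exactly how the receiver detects tampering.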

 

Below are the core components with which DKIM functions:

s (Selector) – An arbitrary label that identifies a specific key pair; in practice it usually corresponds to the SMTP server that holds the private key.
We can have multiple selectors if we have multiple SMTP servers,
or we can use the same key pair on all the SMTP servers, which is simplest because we then do not need to publish multiple DNS records for multiple selectors.

_domainkey – A fixed, static part of the protocol itself; it cannot be altered.

d (Signing domain) – The domain to be verified, so it must be our domain name.

p (Public-key data) – The public key of our generated key pair, in base64 encoding.

Once the DKIM record name is decided, we publish a TXT record in DNS for the newly created subdomain, containing the public key generated on the DKIM-responsible server (the selector).
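Putting these components together, a published DKIM record might look like the following (a hypothetical example: the selector name, domain and truncated key value are illustrative only):

```
; TXT record for selector "mail1" on example.com
mail1._domainkey.example.com.  IN  TXT  "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3...AQAB"
```

The receiving server reconstructs this name from the s= and d= tags in the signature and queries it to fetch the public key.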

Below are the additional components which can be added if required:

v – the version.
a – the signing algorithm.
c – the canonicalization algorithm(s) for header and body.
q – the default query method.
l – the length of the canonicalized part of the body that has been signed.
t – the signature timestamp.
x – the signature expiration time.
h – the list of signed header fields, repeated for fields that occur multiple times.
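For reference, here is what a DKIM-Signature header carrying these tags might look like on an outgoing message (hypothetical values throughout):

```
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=example.com;
        s=mail1; q=dns/txt; t=1468935000; x=1469539800;
        h=from:to:subject:date;
        bh=<base64 body hash>; b=<base64 signature data>
```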

Below are the overall steps:

1) Create your signing key in the agent or server responsible for this job in your environment.
2) Publish your DKIM DNS record for your domain.
3) Enable the DKIM signing option for all outbound emails.

Below is the standard DKIM configuration through an SMTP gateway MTA agent:

[Screenshot: DKIM configuration on the SMTP gateway]

Benefits of DKIM:

1) DKIM adds positive weight with anti-spam systems, for example in the SCL rating of our internet email.
2) With SPF and DKIM together, it becomes very difficult to send spoofed email on behalf of our domain.

If we have multiple SMTP gateways, do we need multiple selectors?

No. We can use the same key and profile on all SMTP gateways: create a domain profile and signing key on the first gateway, publish the TXT record, and then import the same keys on the remaining gateways. It is better to have only one TXT record for a domain, and this way we do not need to create multiple TXT entries for multiple selectors.

After this, export the public key and add the TXT entry on the public DNS server.
So a DKIM-enabled organization has all its sent email stamped with a signature generated with the private key held on the DKIM MTA agent or SMTP gateway.
The recipient domain, if it performs DKIM validation, queries the DKIM TXT record and verifies the signature with the published public key.
The recipient domain considers the message valid only when the signature verifies. At its heart, this is simply a key pair.

DMARC: the Domain-based Message Authentication, Reporting and Conformance (DMARC) standard

DMARC is a mechanism for domain owners to receive reports on DKIM and SPF results for their domain, and to tell receivers what to do if SPF or DKIM fails for mail claiming to come from that domain.

A DMARC policy gives the message receiver clear instructions to follow if an email does not pass SPF or DKIM authentication, for instance to reject it or junk it, which we can configure according to our requirements.
DMARC receivers send reports back to the domain owner about messages that pass and/or fail DMARC evaluation.

Through DMARC, we can receive daily aggregate reports, and optionally forensic reports, about mail sent on behalf of our domain.

We need to designate the email account(s) where we want to receive these reports; all the reports will be sent to this address.

The DMARC policy itself is published as a TXT record in DNS (at _dmarc.ourdomain); receivers look up this record and evaluate the SPF and DKIM results of each message against it.
DMARC tags are the language of the DMARC standard.

Below are the important required tags for DMARC:

v: Version – Identifies the TXT record as a DMARC record; its value is static (DMARC1).

p: Requested mail receiver policy.
This p tag can take any of these 3 values:

p=none: No specific action is taken on emails that fail DMARC validation.
p=quarantine: We request the receiving end to place failing email in the spam/junk folder and mark it as suspicious.
p=reject: The domain owner asks the receiver to strictly reject all emails that fail DMARC validation.
This is the recommended setting and provides the highest level of protection.

rua: Indicates where aggregate DMARC reports should be sent.
Senders designate the destination address in the following format: rua=mailto:domain@example.com.

fo: Dictates what type of authentication and/or alignment failures are reported back to the domain owner.
pct: The percentage of outgoing messages to which the DMARC policy should be applied.
This tag is optional; it can be used to test the impact of the DMARC policy at the initial stage, and later be removed or set to 100.

Below is an example of the DMARC record of how it should be created with the above required tags:

v=DMARC1; p=reject; fo=1; rua=mailto:domain@example.com; rf=afrf; pct=100

The above TXT record follows the DMARC standard.
We also need to specify the email address where the reports should be sent,
and make sure the report messages that ISPs send to that address are not blocked as spam or rejected for any reason.
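Since a DMARC record is just a semicolon-separated list of tag=value pairs, it is easy to sanity-check one with a few lines of Python. This is a convenience sketch of my own, not part of any DMARC tooling:

```python
def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record such as
    'v=DMARC1; p=reject; rua=mailto:...' into a tag dictionary."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part:
            continue
        # split on the first '=' only, so values like
        # 'mailto:user@example.com' survive intact
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    return tags
```

Parsing the example record above yields a dictionary with v=DMARC1, p=reject, pct=100 and so on, which makes it easy to script checks across many domains.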

Important points to consider while enabling DKIM:

1) DKIM verification is performed automatically for all messages sent over IPv6 connections, if the recipient domain has a DKIM verifier enabled.
2) DMARC is configurable on-premise only if your SMTP gateway supports the feature.
3) DKIM performs cryptographic operations on every outbound message sent externally. This adds protocol overhead to outgoing email, and more memory and system resources are consumed to perform the signing.
4) DKIM is an IETF standard and free of cost; there is nothing to pay your ISP, because all we need are the DKIM public key TXT entries.
5) If the receiver domain has no DKIM verifier configured, email sent with DKIM enabled is received normally and there are no issues.

Thanks & Regards
Sathish Veerapandian 
MVP – Office Servers & Services 



Quick Bites: Known issues with Security Update for Exchange 2016 CU2 KB3184736


It has been more than a week since Microsoft released the security update for Exchange 2016 CU2.

The security update can be downloaded from https://support.microsoft.com/en-us/kb/3184736

Yesterday we installed KB3184736 on Exchange Server 2016 CU2 in production.

We ran into the issues below.

Just posting them here so that people can look for these issues after the update and rectify them if they experience the same:

1) The Microsoft Exchange Search Host Controller service was left disabled. We started the service and ran Update-MailboxDatabaseCopy -CatalogOnly to reseed the indexes, which resolved the issue.

2) Got an ASP.NET runtime error for ECP. Strangely, out of all the patched servers only 3 had ECP affected; the rest were fine.
On comparing web.config we found that the ECP BinSearchFolders value showed %ExchangeInstallDir% instead of C:\Program Files\Microsoft\Exchange Server\V15\
Changing the path to C:\Program Files\Microsoft\Exchange Server\V15\ solved the issue.
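If many servers are affected, the fix can be scripted. The sketch below is my own convenience snippet, not a Microsoft tool; adjust the install path if Exchange is installed elsewhere, and keep the .bak copy it writes:

```python
import shutil

DEFAULT_INSTALL_DIR = "C:\\Program Files\\Microsoft\\Exchange Server\\V15\\"

def fix_binsearchfolders(path, install_dir=DEFAULT_INSTALL_DIR):
    """Replace the unexpanded %ExchangeInstallDir% token in a
    web.config with the real install path. Returns True if the
    file was modified; a .bak backup is written first."""
    with open(path, "r", encoding="utf-8") as f:
        text = f.read()
    if "%ExchangeInstallDir%" not in text:
        return False                      # nothing to fix
    shutil.copyfile(path, path + ".bak")  # back up the original
    with open(path, "w", encoding="utf-8") as f:
        f.write(text.replace("%ExchangeInstallDir%", install_dir))
    return True
```

Run it against the ECP web.config on each affected server, then recycle the MSExchangeECPAppPool as usual after editing web.config.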

3) A few OWA users were unable to log in to the OWA page; they got a blank white screen with a "Bad Request" message.

[Screenshot: OWA Bad Request error]

We ran the UpdateCas.ps1 script, located at C:\Program Files\Microsoft\Exchange Server\V15\bin\UpdateCas.ps1, on all mailbox servers, after which the issue was resolved.

[Screenshot: result after running UpdateCas.ps1]


Recertify expired Notes ID


Recently a few Lotus Notes users were getting the below message when logging in to their Notes account:

One or more certificates in your Notes ID have expired.
Contact your Domino administrator.

[Screenshot: Notes certificate expiration message]

At first glance this error suggests a problem with a certificate.
It occurs because an expiration date is set on each user ID on the Domino server, and these messages appear once that date has passed.
The value is usually set to a 10-year period, or whatever the Domino administrator chose during deployment.
This saves administrators from having to recertify the IDs frequently.

So what we need to do when we come across this issue is extend the expiration dates on the affected users' Notes IDs.
In order to extend the expiration time we need to recertify those IDs.

The below steps can be performed to recertify a Notes ID.

Launch the Domino Administrator:

Navigate to People and Groups.

[Screenshot: Domino Administrator, People and Groups]

Navigate to Tools – select People – and select Recertify.

[Screenshot: Recertify option under Tools]

The next step prompts us to choose a certifier process.

Here we have 2 options:

1) Supply certifier ID and password
2) Use the CA process

It is better to use the CA process, which lets us specify a certifier of our own without needing access to the certifier ID file or its password.

After choosing that option we get the below screen with the new certificate expiration date. There is an option to inspect each entry before submitting a request, which is good to enable.

[Screenshot: new certificate expiration date]

After successful processing we get the below message showing the request statistics.

[Screenshot: request statistics]

In this dialog box click OK and continue. After the replication interval the user can log in and will no longer get the certificate expiration message.

Thanks & Regards 
Sathish Veerapandian
MVP- Office Servers & Services


Load Balancing Edge services over internet for Skype for Business


In order for users to connect from outside the organization's network, we need to publish the Skype for Business services. In this article we will look at the best ways to publish the Skype for Business Edge servers over the internet.
By doing this, external users can participate in IM, A/V and web conferencing sessions.

There is a lot of confusion around the architecture of load balancing the Skype for Business Edge servers, and it cannot be treated as an easy deployment. If the SfB deployment is extended to communicate with federated partners, remote users and public instant messaging users, then proper planning of the edge server deployment needs to be carried out.

If we have 2 or more edge servers deployed in the DMZ, they need to be load balanced so that the load is distributed equally across all the edge interfaces.
In general Microsoft recommends DNS load balancing for Edge high availability.

Load balancing distributes the traffic among the servers in a pool so that the services are provided without any delay.

Below are 3 types of load balancing solution that we can use based on our requirement:

DNS load balancing using NAT:

This is the recommended approach.
We load balance each edge service namespace over the internet with multiple A records, NATting them through the firewall to the edge servers.
These addresses are bound per service and routed to individual internal IPs assigned to the external NIC.
Three IP addresses are assigned to this network adapter, for example 131.107.155.10 for the Access Edge service, 131.107.155.20 for the Web Conferencing Edge service and 131.107.155.30 for the A/V Edge service. Each of these listens behind an individual public IP NATted from the firewall.
These IPs do not participate in a hardware load balancer and are used only for NATing.
They sit behind a port-forwarding firewall, which is good.

Advantages of doing this:

1) We assign a separate public IP for each service and use the standard ports, so remote users will not have connection issues behind their own firewalls.
2) It is much easier to troubleshoot: per-service traffic statistics, logging and packet captures make issues easy to identify.

Disadvantages of doing this:

1) Each edge service relies on multiple A records with the same name but different IP addresses. The configuration is therefore not service-aware: failure detection and automatic routing to the surviving server are not possible.

Still, I would go with this option: in a well-planned deployment on a solid network, failures are rare, and this design is very helpful and easy to work with during troubleshooting.

Below is the example of DNS load balancing using NAT

Let's assume I need to load balance 2 edge servers using DNS load balancing with NAT, in the environment below.

[Diagram: example edge environment]

Below is the DNS configuration

[Screenshots: DNS configuration]
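In zone-file form, the external records for a deployment like this might look as follows (hypothetical names and documentation-range IPs; each service name resolves to one public IP per edge server):

```
; External DNS entries for two edge servers behind NAT
sip.example.com.      IN  A  203.0.113.10
sip.example.com.      IN  A  203.0.113.11
webconf.example.com.  IN  A  203.0.113.20
webconf.example.com.  IN  A  203.0.113.21
av.example.com.       IN  A  203.0.113.30
av.example.com.       IN  A  203.0.113.31
```

Clients receive both A records for each name and try them in turn, which is what provides the redundancy in this design.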
DNS load balancing using public IP addresses:

Here the public IPs are assigned directly to one of the two NICs on each edge server, which should be the external NIC. We can either use one public IP for all 3 services on each server and differentiate them by TCP/UDP port, or assign three IP addresses to the adapter, for example 131.107.155.10 for the Access Edge service, 131.107.155.20 for the Web Conferencing Edge service and 131.107.155.30 for the A/V Edge service.
The Access Edge service IP address is the primary address on the NIC, with the default gateway set to the external firewall.
The Web Conferencing Edge and A/V Edge IP addresses are added as additional addresses in the Advanced section of the Internet Protocol Version 4 (TCP/IPv4) properties.

Disadvantages of doing this:
Using a single public IP address for all three edge service interfaces is not recommended.
Though it saves IP addresses, it requires a different port number for each service:

Access Edge – 5061/TCP
Web Conferencing – 444/TCP
A/V Edge – 443/TCP

This can cause issues for remote users connecting from a network whose firewall does not allow traffic over TCP port 5061.
Having three unique IP addresses, by contrast, makes packet filtering much easier when identifying and resolving issues.

Hardware load balancing using public IP addresses:

From Lync 2010 onwards, Microsoft does not recommend hardware load balancing of the edge services from the internet; it is mainly needed for old OCS clients and XMPP, though it works fine as long as both edge servers are up.

We create a virtual IP address on the load balancer (F5, KEMP, etc.) for each service the edge serves (Access, Web Conferencing, A/V).
Behind these virtual IPs we add the edge servers associated with each service.
The main benefit of this is that failure detection is much quicker, since the load balancer detects failures on the server side.

Disadvantages:

1) The A/V service will not see the client's true IP (for example in a peer-to-peer audio call from an external user to an internal one).
2) There are some challenges in configuring outbound client connections going from the edge to the internet (routing and SNAT).

Thanks & Regards
Sathish Veerapandian
MVP – Office Servers & Services


Event Viewer Warning 1040: ActiveSync Direct Push technology


We might notice this warning in the Event Viewer on Exchange servers, from the source MSExchange ActiveSync:

[Screenshot: event 1040 warning]

Event Type: Warning
Event Source: MSExchange ActiveSync
Event Category: Requests
Event ID: 1040
Date: 3/10/2016
Time: 12:54:22 PM
The average of the most recent [513] heartbeat intervals used by clients is less than or equal to [540].
Make sure that your firewall configuration is set to work correctly with Exchange ActiveSync and Direct Push technology. Specifically, make sure that your firewall is configured so that requests to Exchange ActiveSync do not expire before they have the opportunity to be processed.

This warning does not indicate a problem on the Exchange servers themselves; it indicates a mismatched timeout value on the network load balancer or firewall that serves the clients.

ActiveSync uses direct push technology to retrieve email from the server. To keep a direct push session open between the ActiveSync client and the Exchange server, heartbeat interval values are used.

Direct push involves two parts: a request from the ActiveSync mobile client and a response from the Exchange server. When changes occur in the user's mailbox, they are transmitted over a persistent HTTP or HTTPS connection.

Below is the process of ActiveSync Request to the server:

1) The client issues an HTTP request to the Exchange server asking for any changes in the user's mailbox within a specified time. Basically it queries the inbox, contacts, calendar, etc.

2) After Exchange receives this request, it watches the mailbox folders for changes until the specified time limit expires, then issues an HTTP 200 OK response to the client with any updates about the folders.

3) The client then receives the response from Exchange, which can be any of the below:

HTTP 200 OK, no change in folders: the client reissues the ping request after the next heartbeat interval.
HTTP 200 OK, change in folders: the client receives the updates for each changed folder, syncs, and reissues the request at the next interval.
No response: the client lowers the time interval in the ping request and reissues the request at the minimum heartbeat interval to get the update.
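The client-side behaviour in step 3 can be sketched as a small function. This is purely an illustration of the logic, not actual ActiveSync client code; the default bounds of 60 seconds and 59 minutes mirror the Exchange 2016 defaults:

```python
def next_heartbeat(current, response, min_hb=60, max_hb=3540):
    """Illustrative direct push client logic: keep the current
    interval on a successful 200 OK response, but drop back to the
    minimum interval when no response arrives (e.g. a firewall
    silently dropped the idle connection). Intervals in seconds."""
    if response == "no_response":
        return min_hb                              # retry quickly
    return min(max(current, min_hb), max_hb)       # clamp to allowed range
```

A firewall that keeps timing out connections therefore pins clients at the minimum interval, which is exactly the pattern the event 1040 warning describes.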

These heartbeat interval values should therefore match between the network load balancers and the Exchange servers.

Let's have a look at the heartbeat interval values on the Exchange servers.

Where are these values stored in Exchange 2016?

These values can be seen in the web.config file at the below location in the installation directory:

C:\Program Files\Microsoft\Exchange Server\V15\ClientAccess\Sync

There are 4 values, as below:

[Screenshot: web.config heartbeat settings]

MinHeartBeatInterval – The minimum number of seconds that the client waits between issuing heartbeat commands to the server. The default value in Exchange 2016 is 60 seconds. If this value is too small, the client sends HTTP requests very often and drains the device's battery.

MaxHeartBeatInterval – The maximum number of seconds that a client waits between issuing heartbeat commands. The default value is 59 minutes on Exchange 2016.

HeartBeatSampleSize – A bucket in which the server collects the most recent heartbeat intervals received from ActiveSync clients. The server uses this sample to see how clients are issuing ActiveSync HTTP requests and to check that they match the configured values. By default the server collects 200 heartbeat intervals.

HeartBeatAlertThreshold – If the collected sample does not meet the configured minimum/maximum heartbeat values within this interval, an event is logged in the application log. The default value is 9 minutes.
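The relationship between these values can be sketched as follows. This is my own illustration of the check behind event 1040, with the sample size and threshold as parameters rather than Exchange's exact internals:

```python
def should_log_1040(samples, sample_size=200, threshold=540):
    """Sketch of the event 1040 check: once `sample_size` recent
    client heartbeat intervals have been collected, warn when their
    average is less than or equal to the threshold (all values in
    seconds; the defaults here are illustrative)."""
    if len(samples) < sample_size:
        return False                      # not enough data yet
    recent = list(samples)[-sample_size:]
    return sum(recent) / len(recent) <= threshold
```

In other words, when the average client heartbeat drops at or below the threshold, something in front of Exchange is cutting connections short.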

If the HTTP(S) connection timeout on the firewall is not configured to be longer than 59 minutes, i.e. it is shorter than the value on the Exchange servers, then once an ActiveSync HTTP request times out at the firewall the client sends another HTTP request, which can cause connection overload.
Exchange detects this pattern and logs the event above as a warning.

A short timeout value causes the mobile device to initiate new HTTP requests more frequently, which also drains the device's battery quickly.

The best practice is to increase the firewall timeout value for HTTP requests to the Exchange ActiveSync virtual directory, giving users a better experience. The timeout value on the firewall should be equal to or greater than the values specified on the Exchange 2016 servers.

Thanks & Regards
Sathish Veerapandian
MVP – Office Servers & Services.


Troubleshooting endpoint URL’s for Exchange & Skype for Business


This article outlines the client troubleshooting end points that can be used for Exchange and Skype for Business services.

For Exchange

To verify the Exchange Autodiscover service endpoint:
https://yourdomain.com/autodiscover/autodiscover.xml

Usage: The main purpose of Autodiscover is for clients to discover and establish initial connections to their mailboxes.
It also keeps Outlook updated on mailbox changes and refreshes the offline address book.

To verify the Exchange Web Services endpoint:
https://yourdomain.com/EWS/Exchange.asmx

Usage: EWS applications communicate with the Exchange server via SOAP; it is used mainly by developers to give their applications email connectivity.

To verify the Offline Address Book endpoint:
https://yourdomain.com/oab/oab.xml

Usage: An offline address book provides a local copy of the address list to Microsoft Outlook, which can be accessed when Outlook is in a disconnected state.

To verify the ActiveSync service endpoint:
https://yourdomain.com/Microsoft-Server-ActiveSync

Usage: Using the ActiveSync protocol, users can configure and sync their email on mobile devices.

To verify the webmail (OWA) endpoint:
https://yourdomain.com/owa

Usage: Outlook Web App is a browser-based email client used for accessing email via the browser.

To verify the Exchange Control Panel endpoint:
https://yourdomain.com/ecp

Usage: The Exchange Control Panel is a web application, running on the Client Access services, that provides management services for the Exchange organization.

To verify the MAPI service endpoint:
https://yourdomain.com/mapi/healthcheck.htm

Usage: MAPI over HTTP is the Outlook connection protocol introduced in Exchange 2013 SP1; it enables faster connections purely over HTTP(S) and eliminates legacy RPC.

To verify the RPC (Outlook Anywhere) endpoint:
https://yourdomain.com/rpc

Usage: Not used by newer versions of Exchange for fresh deployments; this client connection method is close to retirement.

All the above URLs will be listening on the Exchange 2016 Mailbox server virtual directories.

[Screenshot: Exchange virtual directories]
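For scripted checks, the endpoint list above can be generated per namespace. This is a small helper of my own; the concrete paths for EWS, OWA, ECP and the MAPI health check are common conventions I have substituted, not taken verbatim from this article:

```python
def exchange_endpoints(domain):
    """Return the Exchange client endpoint URLs to probe for a
    given namespace (e.g. with curl or Invoke-WebRequest)."""
    base = "https://" + domain
    return {
        "autodiscover": base + "/autodiscover/autodiscover.xml",
        "ews":          base + "/EWS/Exchange.asmx",
        "oab":          base + "/oab",
        "activesync":   base + "/Microsoft-Server-ActiveSync",
        "owa":          base + "/owa",
        "ecp":          base + "/ecp",
        "mapi":         base + "/mapi/healthcheck.htm",
        "rpc":          base + "/rpc",
    }
```

Looping over the returned dictionary and recording the HTTP status for each URL gives a quick health snapshot of a namespace.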

For Skype for Business:

For the chat services provided through Skype for Business, the main URL endpoints are chat, meet, conference, audio/video and lyncdiscover.
We usually check these URLs during any troubleshooting scenario.

Below are additional endpoints which can be checked and kept for reference.

To test conferencing URL:
https://meet.domain.com/meet/

Usage: Meet is the base URL for all conferences in the organization.

To verify the dial-in URL:
https://dialin.domain.com/dialin/
Usage: Dial-in enables access to the Dial-in Conferencing Settings webpage.

To verify the Lync Control Panel:
https://sip.internaldomain.com/cscp

Usage: This should only be added to and accessed from the intranet; there is no need to publish it on the internet.

To verify the Autodiscover website and retrieve the redirection information for a client:

https://poolexternaluri/autodiscover/autodiscover.svc/root
https://poolexternaluri/reach/sip.svc

Usage: The first is the required service entry point for the Lync Server Web Service Autodiscover response sent to clients. The second is the URL for the Authentication Broker (Reach) web service.

To verify mobile client connectivity:
https://poolexternaluri/webticket/webticketservice.svc

Usage: This is the default authentication endpoint used for mobile client connectivity.
It is a SOAP web service that authenticates a user via NTLM or Kerberos (if configured) and returns a SAML assertion (ticket) as part of the SOAP message response.

To check that the mobility service is working, use the following URL:
https://poolexternaluri/mcx/mcxservice.svc
This is the URL required for the Skype mobility services.

https://poolexternaluri/supportconferenceconsole

Usage: Listening endpoint for the Support Conference Console (default port 6007).
This console is used by Office 365 support personnel to troubleshoot problems with conferences and online meetings.
To verify persistent chat:

https://PCpoolexternaluri/persistentchat/rm/

Usage: There is a virtual directory for Persistent Chat on both the external and internal web sites, so for external testing access the URL via the published persistent chat FQDN.

To verify the hybrid config service:
https://poolexternaluri/hybridconfig/hybridconfigservice.svc

Usage: This appears to be used for hybrid connectivity between Skype for Business Server and Skype for Business Online.

To check address book issues:
https://poolexternaluri/abs/handler

Usage: GAL files are downloaded from the front end server's IIS.

Check the below URL for distribution group expansion:
https://poolexternaluri/groupexpansion/service.svc

Usage: This is configured for Windows authentication by default.

https://poolexternaluri/certprov/certprovisioningservice.svc

Usage: The full URL of the Certificate Provisioning web service. Checking it directly is useful when the automatically calculated web server URL is not correct, for example when the Lync web services are not collocated with the main Director or the Front End pool in a site, or when a load balancer handles web traffic differently from SIP traffic, resulting in different FQDNs for the SIP and web servers.

All the above SfB URLs will be listening on the front end server.

[Screenshot: SfB virtual directories]

On accessing these URLs, if we are not prompted for a username and password, then troubleshooting needs to be performed according to the message we receive, to identify the issue. In most cases the URLs are not published correctly for access from remote endpoints, or there is an issue with authentication or with the virtual directory/server/service itself.

Thanks & Regards
Sathish Veerapandian
MVP – Office Servers & Services 


Active Manager operation failed: attempt to copy the last logs from the source server failed


During a DR failover, when the main site is completely unavailable, we need to carry out a few steps to activate Exchange services, depending on the type of DR setup we have.

Sequential steps need to be carried out: restoring the DAG, activating the databases in the DR site, and pointing the Exchange DNS records to the DR site IPs.

Failover scenarios vary according to the namespace design and the number of sites in Exchange:

Unbound namespace – a single namespace for all Exchange URLs across both the main and DR sites, which is the recommended design.
Bound namespace – more complicated and not recommended, since separate URLs are needed for the main and DR sites.

If we have a three-site setup with the FSW in the third site, or with the FSW placed in Azure, then no manual activation of the database copies in the DR site is required; only the Exchange DNS changes for the DR site are needed.

For detailed information on DAG DR setup i have written a previous blog which can be referred:

https://exchangequery.com/2016/05/04/dag-in-exchange-2016-and-windows-server-2012-r2/

From Exchange 2013 onwards, dynamic quorum in the failover cluster adjusts automatically and recalculates the active nodes during a sequential shutdown in a two-site setup.

During DR activation, when the main site is completely unavailable, we might come across the below error for some databases after rebuilding the DAG cluster in the DR site.

In my test case it was the following:

Stop-DatabaseAvailabilityGroup for the main site completed successfully with no errors.
Restore-DatabaseAvailabilityGroup completed successfully, except for some warnings for one mailbox node in the DR site.

On the server with the warning, all the DBs were in a failed state. We tried to mount them and got the below error:

An Active Manager operation failed. Error: The database action failed. Error: The database was not mounted because it experienced data loss as a result of a switchover or failover, and the attempt to copy the last logs from the source server failed. Please check the event log for more detailed information. Specific error message: Attempt to copy remaining log files failed for database DBNAME. Error: Microsoft.Exchange.Cluster.Replay.AcllUnboundedDatalossDetectedException

Looking at this message, it is interesting that the DR site DBs are trying to reach the main site copies for the last logs even though the DAG cluster has been activated in the DR site and the PAM is in DR.

The below command can be used in case the DR copies are not mounted after activating the DAG in the DR site.

Move-ActiveMailboxDatabase “DBNAME” -ActivateOnServer DRMailboxServer -SkipHealthChecks -SkipActiveCopyChecks -SkipClientExperienceChecks -SkipLagChecks -MountDialOverride:besteffort

So we need to be very clear that this error will not occur normally unless there was some data loss for any of the DBs during the DAG DR activation.

Usually when we run Restore-DatabaseAvailabilityGroup in the DR site, all the DBs should be mounted in the DR site.

The above command should be run only if the database copies are in a failed state after DR site activation and they are not getting mounted.
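If several copies are stuck in this failed state, the activation can be sketched as a loop. This is only a sketch; "DRMailboxServer" is a placeholder for your DR mailbox server name, and the skip switches accept data loss, so use it only when the primary site is confirmed dead:

```powershell
# Sketch: force-activate every failed database copy on the DR node after
# Restore-DatabaseAvailabilityGroup, skipping the usual safety checks.
# "DRMailboxServer" is a placeholder; this path accepts data loss.
$drServer = "DRMailboxServer"

Get-MailboxDatabaseCopyStatus -Server $drServer |
    Where-Object { $_.Status -eq "Failed" } |
    ForEach-Object {
        Move-ActiveMailboxDatabase $_.DatabaseName -ActivateOnServer $drServer `
            -SkipHealthChecks -SkipActiveCopyChecks -SkipClientExperienceChecks `
            -SkipLagChecks -MountDialOverride:BestEffort
    }
```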

Thanks & Regards
Sathish Veerapandian
MVP – Office Servers & Services


Skype for Business Persistent Chat Migration to new Pool


We might come across a scenario where we need to migrate the SfB servers to a new pool.
In a few cases we need to move from old hardware to new high-performance servers, or the servers need to be virtualized from physical hardware to VMs.
This article focuses only on migrating the Persistent Chat pool from the old server to the new server.

Below are the readiness tasks to be completed before starting the Persistent Chat migration:

1) The new Persistent Chat Pool should be already published in the Topology.
2) The new Persistent Chat nodes should be already added in the new pool and SFB setup Wizard should be completed.
3) Certificates should be already assigned to the new Persistent Chat Pool.
4) Connectivity from the OLD PC pool to the new SQL DB is already established.
5) Connectivity from the new PC pool to the old SQL DB is already Established.
6) Connectivity from the old PC hosts to the new PC hosts is established.

To Start the Migration:

Check your current Persistent Chat categories, add-ins, policies and configuration.
This can be verified through the Persistent Chat tab in the Control Panel or through the shell.

To Check Persistent Chat Category:

Get-CsPersistentChatCategory -PersistentChatPoolFqdn "Pchat.exchangequery.com"
Make a note of the current number of persistent chat rooms.

To Check the rooms:

Get-CsPersistentChatRoom | select Name

To Check the Disabled rooms:

Get-CsPersistentChatRoom -Disabled:$True

After confirming that these disabled rooms are no longer in use, we can remove them before we migrate, since there is no point in moving these obsolete rooms to the new pool.

Get-CsPersistentChatRoom -Disabled:$True | Remove-CsPersistentChatRoom

Export the Old Pool Persistent Chat Configuration by Running the below command:

Export-CsPersistentChatData -DBInstance "SQLCL01.Exchangequery.com\SFBDB" -FileName "c:\temp\PChatBckup.zip"

The exported Configuration data will look in XML as below

[screenshot: exported Persistent Chat configuration in XML]

Import Persistent Chat data that we exported to new Skype for Business Pool:

Import-CsPersistentChatData -DBInstance "SQLCL02.Exchangequery.com\SkypeDB" -FileName "c:\temp\PChatBckup.zip"

We will get a confirmation as below before the import, along with a progress bar.

[screenshots: import confirmation prompt and progress bar]

Once the above command is done, we can see the old PC configuration data imported into the mgc database in SQL.

After the command is run, the chat rooms will appear duplicated, since the import created a new instance in the new pool.

Later we can delete them by running the below command:

Get-CsPersistentChatRoom -PersistentChatPoolFqdn "Pchat.exchangequery.com" | Remove-CsPersistentChatRoom

Then remove the persistent chat category:

Get-CsPersistentChatCategory -PersistentChatPoolFqdn "Pchat.exchangequery.com" | Remove-CsPersistentChatCategory

After this is done, go ahead and try logging in as a Persistent Chat-enabled user and check the results.

In my case, the connections were still going to the old Persistent Chat pool.

I guess this was because the old Persistent Chat pool was listed first among the Persistent Chat pools in Topology Builder.
So I went ahead and removed the old Persistent Chat pool from the topology, published the topology, and re-ran the setup on the new PChat nodes.

After this, new connections were going to the new Persistent Chat pool.
All the Persistent Chat rooms I was a member of were present as-is; the only thing was that the rooms I was following disappeared from my list.
That was a small thing, and I was able to search for those rooms and follow them again.

Thanks & Regards
Sathish Veerapandian
MVP – Office Servers & Services



Configure SCOM to monitor servers in the DMZ


SCOM requires mutual authentication to trust and communicate with its agents for monitoring and reporting. Initially SCOM tries to establish Kerberos authentication with the agents; this works for all internal agents joined to the domain.
For workgroup machines in the DMZ network, SCOM uses certificate-based authentication for secure communication and then monitors them.

Below are the high level steps:

1) Configure your firewall to pass traffic from the DMZ agents (DMZ servers) to the SCOM management server's ports 5723 & 5724.
2) Request certificates for all DMZ machines (certificate type must be Server Authentication & Client Authentication).
3) Request a certificate for the SCOM machine (certificate type must be Server Authentication & Client Authentication).
4) Import the Server Authentication & Client Authentication certificates on the DMZ machines.
5) Import the Server Authentication & Client Authentication certificates on the SCOM 2012 server.
6) Run MOMCERTIMPORT on all machines and assign the certificate.
7) Approve the DMZ agents in the SCOM server.
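The firewall rule in step 1 can be verified from a DMZ host before touching any certificates. A quick sketch, assuming "scom.exchangequery.com" as a placeholder for your management server FQDN:

```powershell
# Sketch: confirm a DMZ host can reach the SCOM management server
# on the agent communication ports before installing certificates.
$scomServer = "scom.exchangequery.com"   # placeholder FQDN

5723, 5724 | ForEach-Object {
    $result = Test-NetConnection -ComputerName $scomServer -Port $_
    "{0}:{1} reachable = {2}" -f $scomServer, $_, $result.TcpTestSucceeded
}
```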

For publishing the certificate request for SCOM there are two options, depending on the CA we have:

  1. Enterprise CA.
  2. StandAlone CA.

1) Enterprise CA

If we are going to request the certificate from an enterprise CA, we need to publish a certificate template for SCOM through the enterprise CA.

To perform the task through the enterprise CA, do the below:
Open the Certification Authority console – navigate to Certificate Templates – and select Manage.


Right-click the Computer certificate template and click Duplicate Template.


Make sure the option allow private keys to be exported is chosen.


The most important thing to note is that in the extensions it needs to have both Server Authentication and Client Authentication enabled. This applies to both the SCOM server and the DMZ hosts throughout the configuration, regardless of whether we are requesting the certificates from an enterprise CA or a stand-alone CA.


Once the above is completed, we can import a certificate issued from this duplicated template on the SCOM server.

2) StandAlone CA:

Below are the steps that need to be carried out for a stand-alone CA SCOM certificate request:

Go to the SCOM 2012 Server

Connect to the computer hosting certificate services

https://ca.exchangequery.com/certsrv


Click request a certificate and submit advance certificate request


Click create and submit request to this CA

After that we will get a web access confirmation as below; click Yes.

[screenshot: web access confirmation]

Below is the information that needs to be filled in:

Name – name of the server requesting the cert.

Type of Certificate – Choose Other

In OID  enter – 1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2 (This plays a major role in enhanced key usage)


Keyoptions – Select Create new key set

CSP – Select Microsoft Enhanced Cryptographic Provider v1.0

Key Usage – Select Both

Key Size – 1024

Select – Mark Keys as exportable.

Request Format – CMC

Hash Algorithm – SHA1 and give friendly name and submit.


Once the request is approved by the CA, we can go ahead and import the certificate on the SCOM server.

Request certificate for DMZ Servers to be Monitored:

First and foremost, we can request the certificates from an internal domain-joined server, since most of the time the DMZ servers will not have access to the certificate web enrollment services on port 443 of the internal certificate authority server.

So what we can do is generate the certificate requests from one machine in the domain network and then import the certificates to the DMZ servers.

Perform the same process of submitting the certificate request for all the DMZ servers

Below is the information that needs to be filled in:

Name – name of the  DMZ server that requires the certificate.

Type of Certificate – Choose Other

In OID  enter – 1.3.6.1.5.5.7.3.1,1.3.6.1.5.5.7.3.2 (This plays a major role in enhanced key usage)

Keyoptions – Select Create new key set

CSP – Select Microsoft Enhanced Cryptographic Provider v1.0

Key Usage – Select Both

Key Size – 1024

Select – Mark Keys as exportable.

Request Format – CMC

Hash Algorithm – SHA1 and give friendly name and submit.

Once the above is done, we need to approve the requests from the CA and then import them on the server from which we requested the certificates for those DMZ machines.

Now we need to export the certificates from that requesting machine and then import them on all the DMZ servers which need to be monitored.

There are multiple ways of doing this. I prefer doing this via Digicert Windows Utility Tool.

Download the DigiCert Windows utility tool from the below URL on the machine where the certificates were requested:

https://www.digicert.com/util/

On opening it, we can see all the issued SSL certificates which have a private key on that machine.

Select the requested DMZ server certificate and click Export.


Select the option to export the private key, and export the certificate with a password.


Once the above steps are completed, we need to import these certificates into the computer personal store on the DMZ servers.

We can use the certificate import wizard as below to import the above certificates on the DMZ servers.

[screenshot: certificate import wizard]

Now the final step is to run MOMCERTIMPORT on all machines, select this certificate, and we are done.

The MOMCERTIMPORT GUI tool can be found on the SCOM 2012 installation media in the below directory:

E:\supporttools\AMD64\MOMCERTIMPORT

Make sure the same version of the tool from the setup media is copied to all machines.

Just run this tool on all machines; we will get a pop-up window to confirm the certificate. Confirm by choosing the relevant requested certificate on each server.
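The import can also be scripted instead of using the pop-up picker. A hedged sketch; the paths and subject name below are placeholders, and the switches shown are the commonly documented MOMCertImport usages:

```powershell
# Sketch: run MOMCertImport from the command line instead of the GUI.
# Tool path, PFX path and subject name are placeholders for your environment.
$tool = "C:\Tools\MOMCERTIMPORT\MOMCertImport.exe"

# Option 1: import a PFX file directly (prompts for the export password)
& $tool "C:\Certs\dmzserver01.pfx"

# Option 2: select an already-imported certificate by its subject name
& $tool /SubjectName "dmzserver01.exchangequery.com"
```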

After the above is completed, wait for some time and these DMZ servers will appear under Administration – Pending Management in the SCOM console; we just need to approve them and we are done.

Thanks & Regards
Sathish Veerapandian
MVP – Office Servers & Services


Skype for Business Unable to present Desktop – Call failed to establish due to a media connectivity Failure


All Skype for Business clients in remote locations were unable to present screen sharing through Meet Now, peer-to-peer calls and conferences.
This was a new deployment, and users were unable to present their desktops.

Below were the test scenarios:

1st test – from a remote user's network to my home network – received the error (We couldn't connect to the presentation because of network issues. Please try again later.)
2nd test – from a remote user's network to my office network – received the same error.

The below troubleshooting was done:

1) Did a telnet to lyncdiscover.domain.com on ports 80 and 443 (this was done just to make sure the clients, when logging in, get all the updated info about the pool, SFB configuration, etc.)
2) Did a telnet to meet.domain.com on port 443 – successful
3) Did a telnet to join.domain.com on port 443 – successful
4) Did a telnet to av.domain.com on port 443 – successful
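The same checks can be scripted from the affected client machine. A sketch; the host names below follow the example domain and are placeholders for your SIP domain:

```powershell
# Sketch: repeat the manual telnet tests against the SFB external endpoints.
# Replace domain.com with your SIP domain.
$tests = @(
    @{ Host = "lyncdiscover.domain.com"; Port = 80  },
    @{ Host = "lyncdiscover.domain.com"; Port = 443 },
    @{ Host = "meet.domain.com";         Port = 443 },
    @{ Host = "join.domain.com";         Port = 443 },
    @{ Host = "av.domain.com";           Port = 443 }
)

foreach ($t in $tests) {
    $ok = (Test-NetConnection -ComputerName $t.Host -Port $t.Port).TcpTestSucceeded
    "{0}:{1} -> {2}" -f $t.Host, $t.Port, $ok
}
```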

Assume the below deployment scenario:
1) The Edge servers were behind DNS load balancing, in a scaled consolidated topology using NAT.
2) UDP 3478 open for the AV service external IP.
3) TCP 443 open for the external IPs.
4) The 50K port range was blocked in my case, since there were no legacy OCS clients.
5) No Edge hairpin traffic was allowed between the audio/video public IPs.

[diagram: edge deployment scenario]

Did a Snooper trace from the affected remote client and got the following info in the Snooper logs.

The error was "call failed due to media connectivity failure" when both endpoints are remote.

[screenshot: Snooper log showing the media connectivity failure]

Now it was time for me to dig into how the SFB clients establish the connection and over which protocols. So I started to explore STUN, TURN and ICE, since until then I had only a glossy overview of these topics.

So what kind of protocols they use:

SFB/Lync uses all three of these protocols to establish media connectivity:

ICE:
ICE stands for Interactive Connectivity Establishment, a protocol for communications. All Lync/SFB clients are ICE clients and use ICE to try to establish connectivity between themselves and another ICE client. Remember, this is the main protocol which functions as the core and wraps the other two to establish a path.

STUN:
The current name for this acronym is Session Traversal Utilities for NAT.
It allows the SFB client to discover the available public IP for the SFB media path in order to establish connectivity.

TURN:
Traversal Using Relays around NAT.
This establishes a relayed connection between the external client and the client inside the network. Using this, the Edge servers create a relay and offer ports over UDP and TCP for the media path. Once this relay is established, it allows the remote client to send its media to the internal network client.

So now we can understand clearly that the external corporate firewall requires hairpin traffic to be allowed between the A/V Edge public IPs for STUN and TURN to work over the required UDP and TCP paths.

Since these are the most commonly used IETF standard protocols, SFB clients use them as well.
Now the SFB clients use the below process to establish a media connection with the remote client:

Candidate Discovery:
Where the clients discover their available public IP addresses for media connectivity. These include both STUN and TURN addresses of the Edge server.

Candidate Exchange:
This is the place where both the SFB clients sends each other list of addresses on which they can be communicated for this media path.
Remember this will happen bidirectional.

Connectivity Checks:
This is where both candidates (clients) try to establish a connection on all these addresses simultaneously (not one by one).
The result is that the SFB client picks one of the available routes and establishes the connection with whichever one responds first.

Candidate Promotion:
This is the final stage for the SFB client and happens after the call is established and running.
Here, if the clients identify a path which is more optimal and quicker, they switch to that route, which gives a better experience to the user.

This candidate information can be seen in the Snooper logs.

We can see three types of candidate information.

The first one below is for the 50K port range and can be ignored if you do not have this option:

[screenshot: candidate list for the 50K port range]

The second one is for audio and the last one will be for video; the audio candidate looks similar, with the label "main" indicating audio.

[screenshot: candidate lists for audio and video]

Let's say we have only the 50K port range opened and not 443 for UDP; then we can see only those 50K candidate lists.

TCP-ACT indicates that with this candidate pair the client is able to send RTP and RTCP traffic

[screenshot: STUN candidate pair showing TCP-ACT]

By looking at it we can confirm that the candidate is a STUN pair; TCP-ACT and "typ srflx raddr" (server reflexive) are what indicate a STUN pair.

In this case, if the candidate discovery fails in all cases, we can find a SIP BYE in the Snooper logs, which mentions opaque=epid followed by a GUID.

There are 2 solutions for this problem to work:

Allow port 50K inbound:

We can allow the media communications on the Edge audio/video IP only over the 50K port range. But in real scenarios, when users connect from different networks, the media path has to cross firewalls which may have only the standard ports 80 & 443 allowed, and these ports might be blocked.

Allow the hair pin edge traffic:

Allow the traffic on the Edge server external firewall to traverse between the two AV Edge servers' public IP addresses. This provides the appropriate candidate lists for clients connecting via different Edge servers on UDP port 3478 through this hairpin traffic.

Note:

1) If we have only one Edge server installed, we do not need to follow these steps, since all clients will connect to the single Edge node and no issue will be seen. Just make sure UDP 3478 is open for this communication.

2) SFB clients will always try to establish the media path via UDP as the preferred transport if it is available. If UDP isn't available, they switch to TCP and establish connectivity.

Thanks & Regards
Sathish Veerapandian
MVP- Office Servers & Services.


Read MAC EMLX apple email from Windows and MAC devices


What is EMLX File?

Mac operating systems have come configured with Apple Mail (Mail.app) since version 10.0. Like many operating systems, Mac OS X includes Apple Mail as its default messaging platform for desktop communication. Its qualities have made it a standard messaging platform among Apple Mac users, and the improvements in each Mac OS version have gained it a great number of users, making Apple Mail the most common communication medium for Mac users owing to its uncomplicated reachability. All these aspects have brought Apple Mail to the notice of investigators, since Mac-supported applications pose complications during investigations due to the lack of a dedicated viewer being available.

Location of EMLX File

A file with the EMLX extension is an Apple Mail email file created with Apple's Mail program for Mac OS X. EMLX files are plain text files which store just a single email message. They are normally found on a Mac in the ~user/Library/Mail/ folder, below the /Mailboxes/[mailbox]/Messages/ subfolder or sometimes within the /[account]/INBOX.mbox/Messages/ subfolder.

Why does the need arise to view an EMLX file?

There are many reasons that make it necessary for users to look for an EMLX viewer, as per the requirements mentioned below:

  • The EMLX file is corrupted or fails to open, and users urgently need to view the crucial email messages without waiting for the installation of the particular email client.
  • To view EMLX email messages received as attachments, which were damaged in transit.
  • The need to open an Apple Mail EMLX file saved on an external storage device in Windows OS.

Free EMLX Viewer – Open EMLX Files from Apple Mail to read Messages

EMLX Viewer for Windows is an easy-to-use program which provides the ability to open and view EMLX files from Apple Mail on Windows. It also works with the regular EML file format. This is a portable, freeware solution which comes in handy if you do not have the Apple Mail client installed to view EML messages, especially since it does not require you to set up a mail account. You only need to point to the file and open it.


Although the EMLX file reader does not require installation, be aware that it creates cache files in its own directory when opening EMLX files. As far as the interface is concerned, the tool has a clean window with a native structure, where you can get started by opening an EMLX file or an entire folder which contains multiple emails. The emails are neatly organized in a tree-view structure on the left and can be accessed from the right. In addition to the message body, you can view graphical content, attachments, and header information such as sender, receiver, subject and date.

If you need to deal with large amounts of text, you can use the built-in search function to look up information across the whole raw message or only in the shown headers. Search results can be restricted by specifying start and end dates. Moreover, you can change the date format and refresh all displayed messages if any modifications were made in the meantime. EMLX files are automatically converted to EML format, so simply double-click an entry in the list to open its location in Windows Explorer and view the messages and attachments.

There are no compatibility issues with the software, as the utility can run on all versions of the Windows operating system, ranging from Windows NT to Windows 10.

Source URL – http://www.bitrecover.com/free/emlx-viewer/

Thanks & Regards
Rollins Duke
Technical Analyst


Migration status of mailboxes movement in Exchange 2016


The Mailbox Replication service is the service responsible for mailbox moves, mailbox imports, mailbox exports and restore requests.

This article focuses on the migration status of the migration batch in Exchange 2016.

The move request statistics can be viewed by running the below command

Get-MoveRequest | Get-MoveRequestStatistics | Select DisplayName,StatusDetail,PercentComplete
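For a large batch it helps to summarise the requests by status first, and then list only the stalled ones. A sketch:

```powershell
# Sketch: summarise all move requests by their detailed status,
# then list only the stalled ones with their progress.
Get-MoveRequest | Get-MoveRequestStatistics |
    Group-Object StatusDetail |
    Sort-Object Count -Descending |
    Format-Table Count, Name -AutoSize

Get-MoveRequest | Get-MoveRequestStatistics |
    Where-Object { $_.StatusDetail -like "Stalled*" } |
    Format-Table DisplayName, StatusDetail, PercentComplete -AutoSize
```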

Below were the status reasons of the migration notified for delayed migration batches:

Stalledduetotarget_dataguaranteewait:
From Exchange 2010 onwards there is a Data Guarantee API that is used by the Mailbox Replication Service (MRS) to check the health of the database copy architecture based on defined settings of the database.
This API is called by MRS to check the following:
Check Replication Health – confirm that the prerequisite number of database copies is available.
Check Replication Flush – confirm that the required log files have been replayed against the prerequisite number of database copies.
If a Satisfied response is returned within the 15-minute stalling period, MRS will automatically resume the move request.

This is usually triggered during the move request to determine the health of the target database copies to which the mailboxes are moving from the legacy servers.
If the Data Guarantee API returns a NotSatisfied or a Retry response, MRS will queue the move request and retry the query every 30 seconds.

The parameters controlling these values can be seen in “MSExchangeMailboxReplication.exe.config” file located at “C:\Program Files\Microsoft\Exchange Server\V15\Bin”

Parameter Name                 Default      Min        Max
DataGuaranteeCheckPeriod       00:00:05     00:00:01   02:00:00
DataGuaranteeTimeOut           00:10:00     00:00:00   12:00:00
DataGuaranteeLogRollDelay      00:01:00     00:00:00   12:00:00
DataGuaranteeRetryInterval     00:15:00     00:00:00   12:00:00
DataGuaranteeMaxwait           1.00:00:00   00:00:00   7:00:00
EnableDataGuaranteeCheck       True         False      True

Stalledduetotarget_mdbreplication:
This value is also returned from the Data Guarantee API when checking the replication health of the target database copies, if they are members of a DAG and have database copies.
We might get this message if the MRS is waiting for information from the target server about the replication status of the database copies.

So in this case the passive copy must:
1) Be healthy.
2) Have a replay queue within 10 minutes of the replay lag time.
3) Have a copy queue length of less than 10 logs.
4) Have an average copy queue length of less than 10 logs.
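The copy health that the API evaluates can be eyeballed with a quick sketch against the criteria above:

```powershell
# Sketch: check the target DAG copies against the criteria above
# (healthy status, copy queue and replay queue lengths).
Get-MailboxDatabaseCopyStatus * |
    Sort-Object Name |
    Format-Table Name, Status, CopyQueueLength, ReplayQueueLength -AutoSize
```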

Below are the parameters controlling this in the MSExchangeMailboxReplication config file:
mdblatencyhealththreshold
mdbfairunhealthylatencythreshold
mdbhealthyfairlatencythreshold
mdblatencymaxdelay

So, in the end, all the database copies must be healthy if we are randomly distributing mailboxes to the target destination.

Stalledduetohigherpriorityjobs:

We might get this status if Exchange workload management, introduced in Exchange 2013, is keeping the Exchange system resources busy with other Exchange operations, and hence the move requests are affected.

The first preferred option is to submit the new move requests with the priority modified to Emergency or Highest, by running the below command:
New-MoveRequest -Identity Mailbox -TargetDatabase "DB Name" -BatchName Test -Priority Highest
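Requests that are already queued can be bumped as well, without recreating them. A sketch using Set-MoveRequest; "Test" is a placeholder batch name:

```powershell
# Sketch: raise the priority of an already-submitted batch instead of
# recreating the move requests. "Test" is a placeholder batch name.
Get-MoveRequest -BatchName "Test" |
    Set-MoveRequest -Priority Highest
```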

StalledduetoCI:
This is caused by content indexing on the database copies. To solve it, turn indexing off on the mailbox database until the migration is complete for the DB where the mailbox resides.

To turn it off run the below command :
Set-MailboxDatabase "your mailbox database" -IndexEnabled:$False

Note: This should be re-enabled once the migration has completed.
This error might not happen in Exchange 2016 environments, since the indexing process has completely changed in Exchange 2016.

Stalledtotarget_disklatency:

This might happen if there are issues with disk performance: disk latency causes the response time from the source to increase, and the migration batches get timed out, delaying the movement of the mailboxes. Start by checking the target Exchange 2016 disk performance (IOPS, etc.). If we get this status there is a serious problem with the Exchange 2016 performance, and it depends on the designed storage architecture and how the database copies are distributed, with how many mailboxes in each copy.

Relinquishedwlmstall:

We might get this because of large delays due to unfavorable server health or budget limitations.
In most practical cases we notice this status when moving a batch of large mailboxes, over 5 GB in size.

These are the parameters controlling this:
WlmThrottlingJobTimeOut
WlmThrottlingJobRetryInterval

The best solution for this is to move the large mailboxes in smaller batches, so that the system resources are sufficient to handle the migration.

Below are the major parameters that is controlling the migration on the Exchange 2016 servers:

“MSExchangeMailboxReplication.exe.config” file located at “C:\Program Files\Microsoft\Exchange Server\V15\Bin”

Parameter                        Default    Min        Max
MaxRetries                       60         0          1000
MaxCleanupRetries                480        0          600
RetryDelay                       00:00:30   00:00:10   00:30:00
MaxMoveHistoryLength             5          0          100
MaxActiveMovesPerSourceMDB       20         0          100
MaxActiveMovesPerTargetMDB       20         0          100
MaxActiveMovesPerSourceServer    100        0          1000
MaxActiveMovesPerTargetServer    100        0          1000
MaxActiveJobsPerSourceMailbox    5          0          100
MaxActiveJobsPerTargetMailbox    2          0          100
MaxTotalRequestsPerMRS           100        0          1024

Important tips to note before migration:
1) Ensure there is no file-level antivirus running on the target migration servers.
2) Copy a 1 GB file from the source server to the target server and verify the copy speed, to make sure there are no network issues.
3) Make sure no backup jobs are running during the migration batch period.
4) It is better to initiate a small migration batch first, say 500 users, and monitor memory, CPU and storage in Perfmon during this period to make sure the resources are sufficient.

Note: Do not modify any values in MSExchangeMailboxReplication.exe.config for any reason. It is better to open a call with Microsoft if any issue is identified in the mailbox migration batches.

Thanks & Regards
Sathish Veerapandian
MVP- Office servers and Services


Customize Meeting responses to HTML tag in Exchange 2016


By default, when a meeting room response is received, the end user gets a plain message that says the request was accepted.

This response is fine for internal users, since they know where the meeting room is located.
But when an external person or vendor is invited to a meeting, it is really difficult for them to find the office and meeting room location.

This blog focuses on adding the meeting room location to the meeting room response in HTML, so that external users can easily find the location of the office and the meeting room.

If we only need to add an additional response with basic plain text, we can use the below command and add the required text message:

Set-CalendarProcessing -Identity "phoenix" -AddAdditionalResponse:$true -AdditionalResponse:"Welcome to Phoenix Meeting Room"

But the above command will not help us add any HTML tags or company logos to the meeting response.

In order to add the custom HTML tag we can perform the below steps:

Adding HTML tags to the meeting response is possible by accessing the resource mailbox via ECP through a delegated admin account for that resource mailbox:

https://yourdomain.com/ecp/phoenix@exchangequery.com

After opening the resource mailbox via ECP, navigate to Settings.


After that, enable the "add additional text" tick box and add the required HTML tag.

Note that pasting a link directly here will not render as HTML; the meeting response will show the raw link. The big change from Exchange 2010 is that we need to add the actual HTML code, as in the example below.

[screenshot: additional response field with HTML code]

Just playing around with simple HTML and adding the required values will satisfy this requirement.

We can also reference a background image or company logo uploaded to a SharePoint site in these meeting responses, which gives them a better look.

In the case below, I have added only the office location, so that users can easily drive in and reach the meeting, plus the company logo fetched from SharePoint for a better look, using the below HTML tag:

<DIV><FONT size=2 face=Tahoma>For the office location, <A href="https://enter yourgooglemapslocationhere">Click here</A>
Address:
ExchangeQuery.
Jumeriah lake Towers
Opposite to Downtown
<div><img src="https://exchangequery.sharepoint.com/Shared%20Documents/%24_109.jpg"></div>
</FONT></DIV>

After adding the above HTML, users get the meeting room location and the company logo at the bottom of their meeting response, as in the example below.

[screenshot: meeting response with location link and company logo]

Make sure to use the supported image formatting as per the below TechNet source:

http://technet.microsoft.com/en-us/library/bb124352.aspx#Images

Hope this helps

Thanks & Regards
Sathish Veerapandian
MVP – Office Server and Services


Frequent Popups in Outlook -The Microsoft Exchange Administrator has made a change that requires you quit and restart Outlook


This error message can frequently appear for users after a mailbox migration from Exchange 2010 to 2013 or 2016.

The actual catch is that this error comes up only for a few users, while everything appears perfectly fine for the rest. Outlook will appear to be working fine and users will be able to send/receive emails, except that this annoying message keeps prompting them very often.

On further analysis, I identified that this occurs only for users who have multiple delegated accounts mapped in Outlook, where the user mailbox resides on one database and the mapped delegated accounts reside on different databases.

The delegated accounts have not fully re-established their connections to the new mailbox databases after the migration for some reason, and the user's delegated mailbox table did not receive the delegate permission account information. We could do a deeper analysis of the mailbox tables for the affected user with MFCMAPI, looking into the ACL tables, but that would consume a lot of time.

Mostly, the below two solutions will fix this issue:

1) Recreate the Outlook profile, which re-establishes the connectivity to the new databases for the delegated accounts and updates the mailbox table for this user.
2) Move the mailbox to a different database, which resets the mailbox table receive folder values, updates the ACL tables for the delegate accounts and solves the issue.

But I am still not sure what causes this issue.
There is also one more possibility which might cause it:
the msExchHomePublicMDB attribute on Exchange 2016 databases should not point to a legacy public folder object (Exchange 2010).

If we find this value on Exchange 2016 databases, we can go ahead and remove it, since no more OAB endpoints depend on public folders and no Outlook clients require public folders in an Exchange 2013/2016 environment.

In order to remove them, perform the below:

Open ADSIEDIT.MSC – connect to the Configuration container – expand Services – Microsoft Exchange – domain – Administrative Groups – Exchange Administrative Group – Databases – right-click the databases seen in the right pane and choose Properties – look for msExchHomePublicMDB and, if it has any value, clear it. Make sure to clear this value for all the other databases we have.


Very important note:

The above troubleshooting is applicable only for users migrated from Exchange 2007/2010 to 2013/2016 and not for the below scenarios in any case.

1) Issue occurs after the mailbox was moved to a new Exchange site or forest with the same Exchange version (Exchange 2010).
2) Issue occurs after changes were made to the public folder databases in Exchange 2010.
3) Issue occurs after changes were made to the Exchange server endpoint.
4) Lync wasn't restarted after the mailbox was moved or after the Exchange server endpoint was changed.
5) You're running an older version of the Outlook client.
6) The service re-balances mailboxes on databases at various sites.

Thanks & Regards
Sathish Veerapandian
MVP  – Office Servers & Services


OWA Error – There are too many active sessions connected to this mailbox


Recently, for one of the shared mailboxes residing on Exchange 2016, users were getting the below error while trying to access it from web mail.

This was a shared mailbox accessed by multiple team members.


This issue happened for only one mailbox and it was fine for rest of the users.

Looked into the IIS logs for the affected mailbox and there were multiple connections coming from different sources.

IIS logs location can be found on below location
C:\inetpub\logs\logfiles\W3SVC1

Further looking into the Event Viewer we found event ID 9646 with the below message for source MSExchangeIS:
Client Type OWA exceeded the maximum objects of 16 per session
So we looked into the default OWA connection limits of the mailbox to see the default values.

The default values can be seen by running the below command:

Get-ThrottlingPolicy

Look at the values of RcaMaxConcurrency and OwaMaxConcurrency in the Global Throttling Policy and the Default Throttling Policy.

What is RcaMaxConcurrency ?

The RcaMaxConcurrency parameter controls how many simultaneous connections an RPC Client Access user can establish against an Exchange server at the same time.

These connections are counted from the moment the server receives a request from the user until the connection is closed (e.g. the connection is considered terminated only when the user closes the browser, goes offline or signs out).
If users attempt to make more concurrent requests than their policy allows, the new connection attempt fails. However, the existing connections remain valid.

A valid value is an integer from 0 to unlimited. The default value is 40.

What is OwaMaxConcurrency ?

The OwaMaxConcurrency parameter specifies how many concurrent connections an Outlook on the web user can have against an Exchange server at one time. A connection is held from the moment a request is received until a response is sent in its entirety to the requester. If users attempt to make more concurrent requests than their policy allows, the new connection attempt fails. However, the existing connections remain valid.

The OwaMaxConcurrency parameter has a valid range from 0 through unlimited. The default value is 20. To indicate that the number of concurrent connections should be unthrottled (no limit), set this value to $null.
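The throttling behavior described above can be sketched in a few lines. This is an illustrative model only, not Exchange's actual implementation: attempts beyond the per-user maximum fail while existing connections remain valid, and None mirrors the unthrottled $null setting.

```python
class ConcurrencyThrottle:
    """Toy model of a per-user max-concurrency policy (not Exchange's code)."""

    def __init__(self, max_concurrency=20):  # OwaMaxConcurrency default is 20
        self.max_concurrency = max_concurrency  # None means unthrottled ($null)
        self.active = {}  # user -> number of currently open connections

    def open_connection(self, user):
        count = self.active.get(user, 0)
        if self.max_concurrency is not None and count >= self.max_concurrency:
            return False  # new attempt fails; existing connections stay valid
        self.active[user] = count + 1
        return True

    def close_connection(self, user):
        # A connection is held until the response is fully sent (or the user
        # signs out / closes the browser); closing frees a slot.
        if self.active.get(user, 0) > 0:
            self.active[user] -= 1


throttle = ConcurrencyThrottle(max_concurrency=2)
print(throttle.open_connection("tonysmith"))  # True
print(throttle.open_connection("tonysmith"))  # True
print(throttle.open_connection("tonysmith"))  # False - limit reached
throttle.close_connection("tonysmith")
print(throttle.open_connection("tonysmith"))  # True again
```

This is why, in the event above, only new OWA sessions fail while already-open ones keep working.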

Solution:
Create a new policy with higher values for RcaMaxConcurrency and OwaMaxConcurrency and then assign some or all users to it, rather than changing the default policy.

Create a new Throttling Policy
New-ThrottlingPolicy -Name HighUsage -OwaMaxConcurrency 50 -RcaMaxConcurrency 100

Apply this policy only to the affected users
Set-Mailbox -Identity tonysmith -ThrottlingPolicy HighUsage

There is one more method, which overrides the default throttling policy and is applied in the registry, but it will be applicable to all mailboxes:

Locate and then click the following key in the registry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem
On the Edit menu, point to New, and then click DWORD Value.
Type Maximum Allowed Service Sessions Per User, and then press ENTER.
On the Edit menu, click Modify.
Type the decimal value that specifies the number of sessions that you want to use, and then click OK.
Exit Registry Editor.
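The same registry steps can also be done from an elevated command prompt in one line (a sketch only; the value 32 here is an arbitrary example, not a recommended number, and the same caveat applies since it affects all mailboxes):

```bat
reg add "HKLM\SYSTEM\CurrentControlSet\Services\MSExchangeIS\ParametersSystem" /v "Maximum Allowed Service Sessions Per User" /t REG_DWORD /d 32 /f
```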

Since this will be applicable for all mailboxes better to avoid this registry entry.

Note:
For the above behavior, as a first step it is always better to reach the affected end user, verify from how many devices and PCs he has connected, try to disable and re-enable the OWA feature for a while and see the results. If we still keep getting event ID 9646 for the affected user, then we can create a throttling policy and assign the user to it.

Thanks & Regards
Sathish Veerapandian
MVP – Office Servers & Services 



Easy Migration steps from ADFS 2.1 to 4.0


In this article we will have a look at the steps to migrate from ADFS 2.1 to 4.0, used here for on-premise web-based claims-aware applications.

Things to perform on the ADFS 2.1 Server :
1) Export the Config Data

The config data can be exported with the scripts located in the \support\adfs folder on the Windows Server 2016 installation media:

Mount the Windows 2016 media.
Export and back up the AD FS configuration data to a safe folder location with the below script:
export-federationconfiguration.ps1 -path c:\adfs2backup

2) Export the certificate with the private key from the ADFS 2.1 personal store.

There are a few ways to export the certificate along with the private key. We can export the certificate through the DigiCert utility along with the private key from the personal store of this ADFS 2.1 server.
3) Make a note of the account under which the ADFS Windows service is running.
This is very important and required during the installation of ADFS 4.0.

Go to local services – ADFS Windows Service – make a note of the logon account name.

4) Make a note of the Federation Service properties.

Open ADFS Management – Edit Federation Service Properties – General – Organizational – Events. These are required during configuration of the federation service on the new ADFS 4.0 farm.

Things to perform on the ADFS 4.0 new server:

1) Import the certificate along with the private key on the new ADFS 4.0 server.

We can use the MMC certificates snap-in to import the PFX-format certificate that was exported from the old ADFS server. This procedure should be done before installing the ADFS 4.0 role.
2) Install the ADFS services role on this new computer and click Configure.

Note:
In AD FS 2.1, we had to download and install the AD FS 2.1 software to deploy the AD FS server infrastructure.
From Windows Server 2012 this component is present as a role in Server Manager, which provides an improved configuration wizard that automatically lists and installs the services required during the installation.

a) From Server Manager choose the ADFS role.

b) Select "Create the first federation server in a federation farm".

Select a domain admin account to install ADFS. It is not mandatory to provide the ADFS service account on this page.

c) In the next page select the certificate just imported to the personal store.

Enter the federation service display name exactly as it was present on ADFS 2.1.

d) For the service account enter the exact service account name and the password present in ADFS 2.1.
e) In the database field specify either the WID database or a new SQL database on this new server, according to the configuration.

After specifying the database we can click Next, after which ADFS 4.0 will be configured successfully.

3) Now import the federation data that was exported from the old ADFS 2.1
run import-federationconfiguration.ps1 -path  c:\adfs2backup

After the import is completed we will be able to see the ADFS configuration as it was present on the previous server.

4) Enable the IDP-initiated sign-on page by running the below command.

Set-AdfsProperties -EnableIdpInitiatedSignonPage $true

The current value can be verified with (Get-AdfsProperties).EnableIdpInitiatedSignonPage

Verify the new ADFS Farm:

Verifying the new ADFS farm is very important before we decommission the old farm.

Make a host entry pointing directly to this new ADFS 4.0 server on a machine that consumes the ADFS service, visit the IDP-initiated sign-on page and make sure the application is able to reach the IdpInitiatedSignOn.aspx page.

Example below :

https://adfs.exchangequery.com/adfs/ls/idpinitiatedsignon.aspx

Good to Know:

1) ADFS on Windows Server 2012 R2 uses the SNI (Server Name Indication) extension of SSL. This means that we need to reach the IdpInitiatedSignon.aspx page with the exact URL of the ADFS farm. So if the ADFS server is ADFS01.exchangequery.com with the IP address 10.34.42.11 and the name of the farm is adfs.exchangequery.com, the following apply:

https://adfs01.exchangequery.com/adfs/ls/idpinitiatedsignon.aspx does not work (TCP RST will be sent to terminate the TLS negotiation)

https://10.34.42.11/adfs/ls/idpinitiatedsignon.aspx does not work (TCP RST will be sent to terminate the TLS negotiation)

https://adfs.exchangequery.com/adfs/ls/idpinitiatedsignon.aspx works

2) ADFS 4.0 no longer uses IIS, so do not install IIS as a part of the prerequisite during the installation. ADFS 4.0 can be published via windows server web application proxy server.

3) Windows Server 2016 has the ability to perform an in-place upgrade of Active Directory Federation Services (ADFS) from 3.0 to 4.0: introduce the new ADFS 4.0 servers into the existing ADFS 3.0 farm (mixed farm), make them primary and then decommission the old 3.0 servers. But this option is not available if we are running an ADFS 2.1 farm.

Thanks & Regards 
Sathish Veerapandian
MVP – Office Servers & Services


Quick Tip – Check Enterprise Vault Users


We can use the EV reports to see the active Enterprise Vault users.

In addition to that, we can use SQL queries to check the active users.

Enterprise Vault is tightly integrated with SQL databases. The Enterprise Vault Directory database holds the configuration information of the archive, including the number of Exchange mailboxes enabled for archiving and their details in Enterprise Vault.

But in the EV articles there are always 2 values to check, which are:

1) MbxArchivingState –
The MbxArchivingState indicates whether or not a mailbox from the Exchange server is enabled for archiving in Enterprise Vault. These are the values EV holds about the archives under its EV organization (directory).

2) MbxExchangeState –
The MbxExchangeState indicates the state of the mailboxes in our Exchange environment. EV determines the state of the mailboxes on the Exchange servers by this value.

To see active users we can run the below query on SQL :

Use EnterpriseVaultDirectory
Select count(*)
from exchangemailboxentry
where MbxArchivingState = 1


To see Disabled Mailboxes we can run the below query on SQL:

Use EnterpriseVaultDirectory
Select count(*)
from exchangemailboxentry
where MbxArchivingState = 2


For new Mailboxes eligible for archive please run the below Query:

Use EnterpriseVaultDirectory
Select count(*)
from exchangemailboxentry
where MbxArchivingState = 0


We can run the below query to check the mailbox archiving state:

SELECT count(MbxArchivingState) as '# Mailboxes',
MbxArchivingState as 'Archiving State'
FROM ExchangeMailboxEntry
GROUP BY MbxArchivingState


The Archiving State values above map as follows:

0 = Not Enabled
1 = Enabled
2 = Disabled
3 = Re-Link

To view the Exchange State we can use the following:

SELECT count(MbxExchangeState) as '# Mailboxes',
MbxExchangeState as 'Exchange State'
FROM ExchangeMailboxEntry
GROUP BY MbxExchangeState

The Exchange State values map as follows:
0 = Normal
1 = Hidden
2 = Deleted
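For scripting around these tables, the two state columns can be decoded with simple lookups. The mappings below are exactly the ones listed above; the row format is a hypothetical (MbxArchivingState, MbxExchangeState) tuple as you might fetch it from the database:

```python
# State-code mappings from EnterpriseVaultDirectory.ExchangeMailboxEntry,
# as listed above.
ARCHIVING_STATE = {0: "Not Enabled", 1: "Enabled", 2: "Disabled", 3: "Re-Link"}
EXCHANGE_STATE = {0: "Normal", 1: "Hidden", 2: "Deleted"}


def describe_mailbox(row):
    """Translate a (MbxArchivingState, MbxExchangeState) pair into text."""
    archiving, exchange = row
    return (ARCHIVING_STATE.get(archiving, "Unknown"),
            EXCHANGE_STATE.get(exchange, "Unknown"))


print(describe_mailbox((1, 0)))  # ('Enabled', 'Normal')
print(describe_mailbox((0, 1)))  # ('Not Enabled', 'Hidden')
```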

Note:

The MbxExchangeState value will be 2 for mailboxes that EV has marked as deleted, and they will not be enabled for archiving. In order to make such mailboxes eligible for archiving again, we need to reset the value to 0 (Normal) in EV by running the below query:

USE EnterpriseVaultDirectory
UPDATE ExchangeMailboxEntry
SET MbxExchangeState = '0' WHERE MbxExchangeState = '2'

Thanks & Regards
Sathish Veerapandian
MVP – Office Servers & Services


Expanding the Disks on Exchange Databases


This article outlines a few tips for extending the storage on Exchange servers hosting the database and log files.

For physical Server:

Add new disks to the RAID hard-drive bay and use the array management utility to add the new disks to the existing RAID.

Then expand the RAID size (using the RAID utility) for the disks which need to be expanded. After this we see the extra space in the RAID config utility. Most SAN systems have the option to dynamically extend the disk space allocated to the servers.

So we can check the below things using the storage management utility we have, based on the type of RAID and storage (e.g. NetApp):

• Check the initial status of the existing drives to make sure they are healthy.
• Insert the new hard drives in the available slots in the hard-drive bay.
• Check the status of the new drives in the storage array management utility.
• Initialize the new disks and make them available.
Then use DiskPart to extend the disk in Windows on the Exchange hosts.

Example for extending the presented disk:
Open a command prompt and type: Diskpart.exe
At the DISKPART prompt, type: Select Volume 1 (selects the volume)
At the DISKPART prompt, type: Extend Size=50000 (if you do not set a size, it will use all of the presented space; here 50000 MB is roughly 50 GB)
At the DISKPART prompt, type: Exit

Using Disk Part does not affect the system accessing the data and can be done anytime.

For VMWare:

Expand the volume size of the Exchange database partition from the vSphere client.
After this the additional space will be reflected immediately on the Exchange servers in diskmgmt.msc.

Expansion of the Exchange database or log drives in VMware is seamless; however, to be safe it is always recommended to have a good backup in place before making this change.

Extend the database partition in VMware.
Extend the presented disk in Disk Management.
If the disks are assigned to the VM, make sure they are thick provisioned.

Most hosted LUNs (e.g. from NetApp) can be grown and shrunk without a single problem on the application side, and other vendors are the same.

Using Disk Part does not affect the system accessing the data and can be done anytime.

For hyper v :

Switch over all databases to one server.
Shut down the server.
In Hyper-V, increase the disk size of all database disks.
Start the server.
After this we need to extend the disk in Disk Management before moving the databases back.
Move the databases back to activate them on the preferred node.
Repeat for the remaining servers.

Additional tips:

1) If the primary mailbox database keeps growing, it is better to have a de-duplication archival solution in place which will manage the storage increase efficiently.
2) Make sure all the newly presented Exchange drives are partitioned and formatted consistently (see tip 7 on MBR vs GPT).
3) If we are extending the disks for a DAG then we need to extend the disks on all DAG members hosting the copies.
4) In larger deployments where we host multiple copies in a DAG, it is always better to have the database disks aligned as mount points only.
5) Dynamic expansion of VHDX files is supported. The older method of dynamically expanding VHDs is not supported.
6) Always use ReFS as the file system for Exchange 2016, but only for Exchange DBs and logs. Use NTFS for the Exchange binaries.
7) Microsoft recommends using the GPT partition structure, since GPT is a newer standard supporting up to 128 partitions in Windows and is gradually replacing MBR. MBR-type partitions are still supported, but MBR only works with disks up to 2 TB in size.
8) Better to have a healthy backup before starting these procedures.
9) For VMware partition expansion, ensure that these VMs have no snapshots before extending the VMDK files.
10) Better to perform this operation during a period of low I/O operations on the array.
11) For DAG members it is better to expand the disks one by one on their copies, see the results and then proceed.

Thanks & Regards
Sathish Veerapandian
MVP – Office Servers & Services


Compliance Search in Exchange 2016


Until Exchange 2013 we were using Search-Mailbox to delete any suspicious spam emails circulated in the organization.

From Exchange 2016 there is a new cmdlet, New-ComplianceSearch, introduced for performing this action: searching for and deleting messages. There are no limits on the number of mailboxes in a single search when using New-ComplianceSearch, whereas Search-Mailbox can only search a maximum of 10,000 mailboxes in a single search.

Search-Mailbox is still applicable and works on Exchange 2016 servers as well.

Example of creating a compliance search:
New-ComplianceSearch -Name "New Phishing Message" -ExchangeLocation "All"

Only a few parameters are allowed, but we require at least these two for a better search:

ContentMatchQuery – The ContentMatchQuery parameter specifies a content search filter and uses the KQL (Keyword Query Language) syntax.

Example :

New-ComplianceSearch -Name "Remove Phishing Message" -ExchangeLocation "All" -ContentMatchQuery "'virus' AND 'your account closure'"

ExchangeLocation – This parameter specifies the location to search.

Accepted values are:
A specific mailbox.
A distribution group.
All – When we specify All it searches all mailboxes.

Force – The command executes only after this parameter is specified. Not sure why this is the case.

There is also an option to modify the created search by using the Set-ComplianceSearch cmdlet.

Important note:
When a new compliance search is created, a shadow In-Place eDiscovery search will be created in the In-Place eDiscovery & Hold page in the EAC, like below.

But its status will show as not started, and we can see this by running Get-MailboxSearch as well.

Microsoft recommends deleting this auto-created shadow In-Place eDiscovery search. Instead, run the Microsoft-provided script on the New-ComplianceSearch page that converts an existing compliance search to an In-Place eDiscovery search.

So when we run Get-ComplianceSearch we should see the compliance searches that we created, but when we run Get-MailboxSearch we should not see any shadow In-Place eDiscovery search created for them.

In short, below is the procedure:

  1. Create a new compliance search.
  2. Remove the shadow In-Place eDiscovery search created for the new compliance search.
  3. Run the script provided in step 3 of this technet article – Compliance Search
  4. Start the In-Place eDiscovery search – Start-MailboxSearch
  5. Create an In-Place Hold
  6. Copy the search results
  7. Export the search results
  8. Use New-ComplianceSearchAction -SearchName "Remove Phishing Message" -Purge -PurgeType SoftDelete to delete the messages

Tips:

When we run the compliance search .ps1 script provided by Microsoft, we should enter the name of the new compliance search we created, as below.

While creating the In-Place Hold it is better to enter values for all the available fields.

Once the search is completed there is an option to preview the search results through a delegated admin account.

After that the data can be exported as a PST.

After that, the New-ComplianceSearchAction command should be used to remove the emails.

Note:

  1. New-ComplianceSearchAction is limited to deleting 10 emails per mailbox at once in a single command, though there is no limit on the number of mailboxes to search.
  2. Search-Mailbox is limited to deleting 10,000 emails per mailbox at once in a single command.
  3. New-MailboxSearch will most likely be deprecated in future updates, since this command is no longer available in Office 365 from July 2017 as per the TechNet source.

Thanks & Regards
Sathish Veerapandian
MVP -Office Servers & Services


Exchange log the real client IPs in the IIS hit logs for SNAT load Balancing


In most cases we would like to know about email client authentication attempts from external sources, along with their source IPs.

It can be in the below scenarios:

1) Frequent account lockouts are happening for an email user and we would like to know the source host causing the lockout.
2) The security team would like to collect logs with the real IP for future investigation of a compromised account.

In most cases Exchange services are published through a load balancer and the servers sit behind it. When Exchange is load balanced at layer 7, it becomes non-transparent: the actual client source IP address is replaced by the load balancer's own IP address, and therefore ONLY this address will be recorded in the IIS logs.
As a result, the Microsoft IIS client logs on Exchange will record the load balancer's IP for each client connection rather than the actual source IP.

For example, if the Exchange services are published via SNAT through a load balancer like KEMP, F5 etc., the IIS logs cannot capture the real source IP, because with SNAT the destination IP address is maintained but the actual source IP address is changed.


When a packet passes through a NAT device, either the source or destination IP address is changed according to the type of NAT in use. However, the information about the changes made to packets is maintained in the NAT device's connection table.
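The connection-table behavior can be illustrated with a small sketch (a toy model of SNAT, not any vendor's implementation): the destination is preserved, the source is rewritten to the NAT device's address, and the table lets return traffic be mapped back to the original client.

```python
class SnatTable:
    """Toy SNAT: rewrite source to the NAT IP, keep destination, remember mapping."""

    def __init__(self, nat_ip):
        self.nat_ip = nat_ip
        self.next_port = 10000  # arbitrary starting port for translated flows
        self.table = {}  # (nat_ip, nat_port) -> original (src_ip, src_port)

    def outbound(self, src, dst):
        """Translate an outbound (src, dst) pair; dst is left untouched."""
        nat_src = (self.nat_ip, self.next_port)
        self.next_port += 1
        self.table[nat_src] = src  # remember the change in the connection table
        return nat_src, dst

    def inbound(self, nat_src):
        """Map return traffic back to the original client address."""
        return self.table[nat_src]


nat = SnatTable("10.34.42.11")
new_src, dst = nat.outbound(("192.0.2.7", 51515), ("203.0.113.5", 443))
print(new_src)  # ('10.34.42.11', 10000) - source rewritten to the NAT IP
print(dst)      # ('203.0.113.5', 443)   - destination maintained
print(nat.inbound(new_src))  # ('192.0.2.7', 51515)
```

This is exactly why IIS behind a layer-7 SNAT only ever sees the load balancer's address: the original source survives only inside the NAT device's table (or in a header the balancer adds, as below).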

Most load balancers, like KEMP and F5, have an option to create an X-Forwarded-For header; we need to enable it.

Once enabled, the X-Forwarded-For header option will capture the source address of the client and append it to the header.

After this we need to add an extra field in the advanced logging module on all Exchange servers so that this real IP is logged in the IIS logs.

To enable advanced logging on all Exchange 2016 servers, perform the below:

The first task is to deploy the Custom Logging role service. If we do not deploy this role service, we may receive a "Feature not supported" error when trying to edit the custom log definition.

To enable the Custom Logging role service in Windows Server 2012 R2 & 2016:
1. Open Server Manager.
2. Click Add Roles and Features.
3. In the Add Roles and Features wizard navigate to the Custom Logging role, which is under the Web Server > Web Server > Health and Diagnostics category.
4. On the Confirmation page, click Install.

Now open IIS Manager and select Logging.

Click Select Fields.

Create a new custom field:

Field Name – we can give any name; it will appear in the logs as a new column.

Source Type – Request Header

Source – X-FORWARDED-FOR

Perform an IIS reset after this. Now we will start seeing the IP addresses of the client PCs in our IIS logs rather than the IP of the load balancer.
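Once the custom field is being logged, the real client IPs can be pulled out of the W3C-format logs with a short script. This is a sketch: the column layout follows the #Fields header, which varies per server, and the sample lines below are hypothetical, not real Exchange log output.

```python
def parse_iis_log(lines, field="X-Forwarded-For"):
    """Yield the value of a named column from W3C-format IIS log lines.

    W3C logs start with a '#Fields:' header naming the space-separated
    columns; custom fields like X-Forwarded-For appear as extra columns.
    """
    columns = []
    for line in lines:
        if line.startswith("#Fields:"):
            columns = line.split()[1:]  # column names after '#Fields:'
        elif line.startswith("#") or not line.strip():
            continue  # skip other comment lines and blanks
        elif columns and field in columns:
            values = line.split()
            yield values[columns.index(field)]


# Hypothetical sample log lines, not real Exchange output:
sample = [
    "#Fields: date time c-ip cs-method cs-uri-stem sc-status X-Forwarded-For",
    "2018-01-10 09:15:02 10.1.1.5 GET /owa 200 198.51.100.23",
    "2018-01-10 09:15:03 10.1.1.5 POST /mapi 401 203.0.113.9",
]
print(list(parse_iis_log(sample)))  # ['198.51.100.23', '203.0.113.9']
```

Here c-ip still shows the load balancer's address (10.1.1.5), while the custom column carries the real client IP, which is what the security team needs for lockout and compromise investigations.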

 

