
Unable to open PST file. Error details: Header file length is zero. If this file is from a previously failed PST export, please delete the file and resume the export


We might come across the below error during a PST import/export:

Error code: -2146233088

Unable to open PST file '\\fileshare\Archive\testuser.pst'. Error details: Header file length is zero. If this file is from a previously failed pst export, please delete the file and resume the export. --> Header file length is zero. If this file is from a previously failed pst export, please delete the file and resume the export.

There can be many causes for this error, and the below tips can be helpful:

1) The mailbox import/export uses the Microsoft Exchange Mailbox Replication service (MRS) on the CAS server. When an import/export request is triggered, a remote PowerShell connection is established from the source CAS to the appropriate destination and to the shared folder to initiate the process. It is therefore better to have the shared network location in the same VLAN where Exchange is hosted, which will speed up the import/export.

2) Restart the Microsoft Exchange Mailbox Replication service – since the MRS handles this job, a restart of the service will definitely help if MRS is stuck processing large jobs and will speed up the migration process.
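A minimal sketch of doing this from PowerShell on the CAS server hosting MRS (the Windows service name is MSExchangeMailboxReplication; verify with Get-Service first):

# Confirm the Mailbox Replication service is present and check its state
Get-Service -DisplayName "Microsoft Exchange Mailbox Replication*"

# Restart the Mailbox Replication service
Restart-Service MSExchangeMailboxReplication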

3) Remove the failed import/export requests with the below commands:

Get-MailboxExportRequest -Status Failed | Remove-MailboxExportRequest

Get-MailboxImportRequest -Status Failed | Remove-MailboxImportRequest

4) We can run the import/export with the below parameters to skip a limited number of bad items:

For Import –
New-MailboxImportRequest -Mailbox testuser -FilePath '\\fileshare\Archive\testuser.pst' -BadItemLimit unlimited -AcceptLargeDataLoss -Priority High -AssociatedMessagesCopyOption Copy -Confirm:$false -ConflictResolutionOption KeepLatestItem -ExcludeDumpster
For Export –
New-MailboxExportRequest -Mailbox testuser -FilePath '\\fileshare\Archive\testuser.pst' -BadItemLimit unlimited -AcceptLargeDataLoss -Priority High -AssociatedMessagesCopyOption Copy -Confirm:$false -ConflictResolutionOption KeepLatestItem -ExcludeDumpster
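To follow up on requests created this way, the standard request cmdlets can be used; a minimal sketch (the mailbox identity and request name are placeholders):

# Check the progress and any failure detail of an export request
Get-MailboxExportRequest -Mailbox testuser | Get-MailboxExportRequestStatistics | Format-List Status,PercentComplete,Message

# Per the error text: delete the zero-length PST from the share, then resume the failed request
# (the default name of the first export request for a mailbox is "MailboxExport")
Resume-MailboxExportRequest -Identity "testuser\MailboxExport"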

5) It is also better to check the free space available on the shared network drive where the PST export is happening, as well as the free space on the disk holding the database from which the PST export/import is running.

6) If the mailbox import/export is failing for a specific user, a mailbox repair might also help. We can perform the mailbox repair with the below command.

New-MailboxRepairRequest -Mailbox "username" -CorruptionType ProvisionedFolder,SearchFolder,AggregateCounts,FolderView
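On Exchange 2013 and later the progress of the repair can be checked afterwards; a minimal sketch (the mailbox name is a placeholder):

# Review the status of repair requests for the mailbox (Exchange 2013+)
Get-MailboxRepairRequest -Mailbox "username" | Format-List Identity,Tasks,Progress,ErrorCode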


Quick Tip – Reduce the amount of Mailbox Audit log information generated by a service account


Usually we enable mailbox auditing to monitor actions taken by mailbox owners, delegates and administrators. However, we do not require mailbox audit to be enabled for service accounts that perform known, legitimate operations.

We can configure mailbox audit logging bypass for service accounts that are configured in applications and access mailboxes frequently. This reduces the amount of audit log information generated by a service account.

Below steps can be performed to bypass audit for the service accounts:

To check the mailbox audit bypass we can run the below command

Get-MailboxAuditBypassAssociation -identity serviceaccount

The main property we need to look at is AuditBypassEnabled.

The default value is False, for both audit-enabled and audit-disabled accounts.

AP

The AuditBypassEnabled parameter controls whether audit logging is bypassed for this account.
When the value is set to $true, mailbox audit logging is bypassed (disabled) for this account.
When the value is set to $false, mailbox audit logging remains enabled for this account.

We can run the below command to bypass the mailbox audit logging for service account.

Set-MailboxAuditBypassAssociation -Identity "service.crm" -AuditBypassEnabled $true

IMP Note:

By default, mailbox audit logging is not enabled for newly created or existing mailboxes.

We can check whether mailbox audit is enabled with the below command.

Get-Mailbox usermbxx | fl *Audit*

The default value will be false like below and the default audit log age limit is 90 days.

AD
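If a longer retention for audit entries is needed, the age limit can be adjusted per mailbox; a minimal sketch (the mailbox name and the 180-day value are examples):

# Increase the audit log age limit from the default 90 days to 180 days
Set-Mailbox -Identity usermbxx -AuditLogAgeLimit 180.00:00:00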
The below script can be used to enable mailbox audit in bulk at the OU level.

The Script can be downloaded here – EnableMailboxAudit

##############################################################
# Description:
# This script enables Mailbox Audit for new mailboxes in your organization at the OU level.
# Run it via Task Scheduler on a weekly basis so that audit gets enabled for newly created mailboxes.
# You need to mention the OrganizationalUnit in the script where the mailboxes are present.
# You need to mention the CSV location in Export-Csv.
# You need to mention the To address, From address and SMTP server (Exchange server) for sending this report by email.
################################################################

add-pssnapin Microsoft.Exchange.Management.Powershell.E2010 -ea SilentlyContinue
add-pssnapin Microsoft.Exchange.Management.Powershell.Support -ea SilentlyContinue
$mbxs = Get-Mailbox -OrganizationalUnit "mention OU Name" | where { $_.AuditEnabled -eq $false } | Select Name, DisplayName, UserPrincipalName, SamAccountName, PrimarySMTPAddress
$mbxs | Export-Csv C:\temp\auditlogs\Audit.csv -Encoding UTF8
$mbxs | % { Set-Mailbox $_.SamAccountName -AuditEnabled:$true -AuditAdmin Copy, Create, FolderBind, HardDelete, MessageBind, Move, MoveToDeletedItems, SendAs, SendOnBehalf, SoftDelete, Update }
$mbxs | % { Set-Mailbox $_.SamAccountName -AuditEnabled:$true -AuditDelegate Create, FolderBind, HardDelete, Move, MoveToDeletedItems, SendAs, SendOnBehalf, SoftDelete, Update }

Send-MailMessage -To emailadmin@domain.com -From reports@domain.com -Subject "Audit Enabled for the attached users" -Attachments C:\temp\auditlogs\Audit.csv -SmtpServer specifysmtpserver -Port 25025 -BodyAsHtml -Body "Audit Enabled"

***************************************************

Thanks 
Sathish Veerapandian

Renew SSL certificate for ADFS URL


This document outlines the steps to renew the SSL certificate for the ADFS claims provider federation metadata URL.

1) To get the application ID and the certificate hash, run the below command.

netsh http show sslcert

ADFS1

Copy only the application ID value; we require this for the certificate renewal. It is better to keep a copy of these results.

2) Run this command to see the ADFS listeners:

netsh http show urlacl 

ADFS2

This is just to take a copy of the ACL URLs before the certificate renewal. This part is sensitive because ADFS holds URL reservations in HTTP.SYS, and the copy will help us in case we face any issues after the certificate renewal.

3) Delete the old certificates –

$Command = "http delete sslcert hostnameport=adfs.exchangequery.com:443"
$Command | netsh

$Command = "http delete sslcert hostnameport=adfs.exchangequery.com:49443"
$Command | netsh

$Command = "http delete sslcert hostnameport=localhost:443"
$Command | netsh

$Command = "http delete sslcert hostnameport=EnterpriseRegistration.exchangequery.com:443"
$Command | netsh

4) Delete the old hostIP and port entries:

$Command = "http delete sslcert hostnameport=0.0.0.0:443"
$Command | netsh

5) Now we can add the new certificates:

Prerequisite:

Take the app ID which was noted down in step 1.

Take the certificate hash – this can be taken from the new certificate thumbprint.

Example below – remove all the spaces and copy the new certificate hash value.

ADFS3

# APP ID
$guid = "paste the appid here"

# Cert Hash
$certhash = "paste the certificate thumbprint"
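If preferred, the thumbprint can also be pulled straight from the local machine certificate store instead of copying it by hand; a minimal sketch (the subject name filter is a placeholder for your ADFS certificate subject):

# Find the newest matching certificate in the local machine store and take its thumbprint
$cert = Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*adfs.exchangequery.com*" } | Sort-Object NotAfter -Descending | Select-Object -First 1
$certhash = $cert.Thumbprint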

To renew actual metadata URL:

$hostnameport = "adfs.exchangequery.com:443"
$Command = "http add sslcert hostnameport=$hostnameport certhash=$certhash appid={$guid} certstorename=MY sslctlstorename=AdfsTrustedDevices clientcertnegotiation=disable"
$Command | netsh

To renew localhost:

$hostnameport = "localhost:443"
$Command = "http add sslcert hostnameport=$hostnameport certhash=$certhash appid={$guid} certstorename=MY sslctlstorename=AdfsTrustedDevices clientcertnegotiation=disable"
$Command | netsh

To renew Device Registrations:

$hostnameport = "adfs.exchangequery.com:49443"
$Command = "http add sslcert hostnameport=$hostnameport certhash=$certhash appid={$guid} certstorename=MY clientcertnegotiation=enable"
$Command | netsh

The above is required because changes were made in ADFS on Windows Server 2012 R2 to support device registration, which happens on port 49443.

$hostnameport = "EnterpriseRegistration.exchangequery.com:443"
$Command = "http add sslcert hostnameport=$hostnameport certhash=$certhash appid={$guid} certstorename=MY sslctlstorename=AdfsTrustedDevices clientcertnegotiation=disable"
$Command | netsh

The above is also required for the device registration service.

Hope this helps.

Update NTP server in Linux Application


We use the NTP protocol to sync the time of servers, network devices and client PCs with our local time zone, so that the correct time is kept across the network. This can be accomplished through an NTP server configured locally in our network, which has the capability to receive and update the local time from satellites.

The time it receives is then used as the benchmark for all machines on the network that are configured to use this machine as their NTP server. This article focuses on updating the local NTP server on a Linux machine.

To see the current date –

SSH to the server (for example with PuTTY) and run – date

To check the NTP service status run – service ntpd status

NTP1

To sync with your NTP server and get the up-to-date time from it, run the below –

ntpdate ntpserverfqdn

Example – ntpdate ntp.exchangequery.local

Once it is updated we will get a message similar to the below:

ntpdate Step time server offset sec
ntpdate adjust time server offset sec

To sync the hardware clock –

hwclock --systohc

Reason to run above command: There are 2 types of clocks in Linux Operating systems.

1) Hardware clock – is the battery powered “Real Time Clock” (also known as the “RTC”, “CMOS clock”) which keeps track of time when the system is turned off but is not used when the system is running.

2) System clock-  (sometimes called the “kernel clock” or “software clock”) which is a software counter based on the timer interrupt.

The above command sets the hardware clock to the current system time, which is updated from the local NTP server in our environment.

Note: We have an option to set the Hardware Clock from the System Time, or set the System Time from the hardware Clock.

Finally we need to add the NTP server in the configuration file.

Navigate via vi to the ntp.conf location – vi /etc/ntp.conf

vi /etc/sysconfig/ntpdate

NTP

Finally restart the ntp service –

service ntpd  restart

There is another option: updating the servers from the NTP pool project at pool.ntp.org.

We can go to this official NTP pool site and choose the servers for our continent or area.

In order to update them VI to the ntp location :

vi /etc/ntp.conf

We can see the default NTP servers like below. We can comment them out and update the file with the correct servers for the country where the server is hosted.

NTP2

In my example I am updating with my local time zone servers as below and commenting out the default ones.

NTP3

After the above is completed the servers will be updated.

We can check the ntp peers synchronization with the below command

ntpq -p

Based on our requirement we can set the NTP server to be our local NTP server or one from our local time zone, after which the Linux server will keep the correct current local time.

Thanks & Regards
Sathish Veerapandian

Inbox folder renamed to Archive


One of the users reported that the Inbox folder was renamed to Archive.

One possibility is that the user triggered this accidentally by highlighting the Inbox, clicking Archive, and then using the create archive folder option or choosing an existing folder.

 

 

While troubleshooting we found this is a known issue, and there is an article released by Microsoft:

https://support.microsoft.com/en-us/help/2826855/folder-names-are-incorrect-or-displayed-in-an-incorrect-language-in-ou 

As per Microsoft, this can also occur if a mobile device with a different application (such as an MDM client) or a third-party server application synchronizes the Exchange Server mailbox. It could also have been caused by a malfunctioning add-in.

If the default Inbox folder has been changed unexpectedly to Archive, we need to skip directly to step 4 in that article and not look into steps 1, 2 and 3.

Use step 4 with the MFCMAPI tool to fix this problem.

Once step 4 is completed we need to reset the folder names as below:

outlook.exe /resetfoldernames

After performing step 4 we can run the below command and make sure the Inbox folder in the root folder path is named correctly and no longer shows as Archive.

Get-MailboxFolderStatistics mbxname | select Name,FolderPath,FolderSize,ItemsInFolder
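To narrow the output to just the default Inbox folder, the FolderType property can be used; a minimal sketch (the mailbox name is a placeholder):

# Show only the default Inbox folder and confirm its display name
Get-MailboxFolderStatistics mbxname | Where-Object { $_.FolderType -eq "Inbox" } | Select-Object Name,FolderPath,FolderSize,ItemsInFolder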

Thanks & Regards 

Sathish Veerapandian

There has been an error installing the Enterprise Vault Cloud Storage Adapter Components


During an upgrade of Enterprise Vault running on a Windows cluster, we were getting the below error.

EVError

Looking into the Application log in Event Viewer on the affected node, we can see the below error message.

EVError

We can also see the below error message in the EV installation logs:

Machine policy value 'DisableUserInstalls' is 0

The installation log folder can be found in the EV installation directory with the format EVInstall.date.time.log.

Solution:

Creating the below registry key will fix the issue:

HKEY_LOCAL_MACHINE\SOFTWARE\KVS\Enterprise Vault\CloudStoragePlugins\Install
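A minimal PowerShell sketch of creating that path, assuming it is created as a registry key exactly as listed above (check the Veritas guidance for whether a value also needs to be set under it):

# Create the CloudStoragePlugins\Install key if it does not already exist (assumption: a key, not a value)
New-Item -Path "HKLM:\SOFTWARE\KVS\Enterprise Vault\CloudStoragePlugins\Install" -Force | Out-Null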

What are these Cloud Storage Plugins?

We can use the Enterprise Vault Administration Console to enable and configure most cloud storage services as secondary storage for our store partitions.

To enable this – open the Vault Admin Console – navigate to the store partition – properties.

Select the store which needs to have secondary storage.

First we need to click on Collections – select Enterprise Vault.

Then we need to click on Migration – select Migrate files.

Caution: If you use secondary storage that is slow to respond, some Enterprise Vault operations that access this storage will take a long time. For example, both tape and cloud storage can be very slow. We get this warning as well during enabling this service.

CLS1

Then in Migrate files we can select the cloud storage subscription we have and apply. There is also an option to remove the collection files from primary storage after they have been migrated.

CLS2

Later in cloud storage service properties we can provide the service name, class, secure access ID and other options.

Thanks & Regards
Sathish Veerapandian

Performing Veritas Enterprise Vault Upgrade for Exchange Environment


An upgrade of Veritas Enterprise Vault will vary according to the setup.

If it is on a single node, the upgrade is easier.
If it is on a Veritas cluster, a few factors need to be taken care of before the upgrade.
If it is on a Windows cluster, a few factors need to be taken care of before the upgrade.

In this article we will have a look at performing an Enterprise Vault upgrade on a Windows Server failover cluster.
Also, if we are upgrading from 11.x.x a lot of things need to be taken care of, because:

12  = Major release
12.X = Minor release
12.x.y = Maintenance release

Readiness before upgrading to EV version 12.0 from 11:
1) EV 12.x and above requires Windows Server 2012, so if you are running an older OS version on the current EV server, you will have to migrate to a new server.
2) EV 12 supports Outlook 2016 on the server with the below condition:
with Outlook 2016 the Exchange connection must be MAPI/HTTP and not RPC/HTTP.
3) It supports only SQL 2012 and above. If we have SQL 2008 then we need to migrate to at least SQL 2012.

Note:
Enterprise Vault does not provide high-availability upgrades, meaning we cannot perform the upgrade while the system is active and accessible via the passive node. The upgrade must be completed on all the nodes in the cluster before we start the Enterprise Vault services again. The system will be down and not accessible during the upgrade, so it is better to plan and perform this upgrade on a weekend.

Below things needs to be done prior to the upgrade:

  1. Stop all the task controller services. No archiving must be initiated or running. Stop all the jobs and make sure no jobs are running for any mailbox servers.
  2. Back up your Enterprise Vault server, data and the SQL stores.
  3. Clean the queues; the queues must be empty. There is a procedure to clean up the queues if EV is running on a failover cluster.
  4. Unload the antivirus on the EV nodes.
  5. Ensure no backup, SQL or node maintenance jobs are running during this time.
  6. Use a supported Outlook client on the EV server –

The Following Versions of Outlook Running on the Server are not supported:
Outlook 2013 SP1 64 bit version
Outlook 2013 Original Release
Outlook 2016 (64-bit version)

Only the below Versions of Outlook on the EV server are supported:

Outlook 2013 SP1 (32 bit version)
Outlook 2016 (32 bit windows installer, available with volume license)

If we need to upgrade Outlook on the EV nodes, perform the following (a hedged service restart sketch follows below):

Stop the EV admin cluster resource service from the failover cluster.
Install the supported version of Outlook.
Restart all the EV services.
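A minimal sketch of stopping and starting the Enterprise Vault services with PowerShell, assuming the service display names begin with "Enterprise Vault" (verify the names in your environment, and remember that on a cluster the admin resource itself should be managed from Failover Cluster Manager):

# List the Enterprise Vault services and their state
Get-Service -DisplayName "Enterprise Vault*" | Select-Object DisplayName, Status

# Stop them before the Outlook upgrade, then start them again afterwards
Get-Service -DisplayName "Enterprise Vault*" | Stop-Service
Get-Service -DisplayName "Enterprise Vault*" | Start-Service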

Upgrade EV on a Windows Server Failover Cluster on EV Nodes:

Before we run the upgrade we need to run the deployment scanner to check the required software and settings.
In order to perform that:
Load the media – run setup.exe – click Enterprise Vault – click Server Preparation.

0Untitled

Untitled1

Once the deployment scanner is completed, if  the prerequisites are successful we can see the results like below.

01Untitled

If they are not successful we might get results like below and need to take corrective steps as mentioned in the report.

Untitled3

Log on to the active node with the Vault service account and bring the admin service resource offline. If there are multiple sites make sure they are also stopped.

Load the media and run the setup. Make sure no MMC consoles are open on the server.

Click Server Installation and select Upgrade existing server.

Select the EV services, Admin Console, search access components, and Operations Manager and reporting (the latter only if Exchange is integrated with EV).

Click install.

Untitle21

Once the setup is complete on the active node we will get a screen like below. It is better to restart after the upgrade completes on all other nodes, SQL and the indexes.

Untitled9

Steps to upgrade the DB (Directory, Monitoring & Audit):

Log on to the active node with the Vault service account.
Open the Enterprise Vault Management Shell.
Run the command Start-EvDatabaseUpgrade -Verbose.
Once the upgrade is complete we can see a dbupgrade subfolder containing the logs of the DB upgrade; they must be verified.

Once the DB is upgraded and the upgrade is completed on all the nodes, we can go to the failover cluster and bring EV Admin Server Resource online.

Additional Requirements based on setup:

  1. Upgrade the EV reporting component.
  2. Upgrade the MOM & SCOM management packs and delete the previous management packs.
  3. By default EV deploys the Exchange Server forms to users' computers automatically. If the forms from the organizational forms library are used, then the Exchange Server forms need to be upgraded.

Thanks & Regards
Sathish Veerapandian

Microsoft Cosmos DB features,options and summary


This article gives an introduction to Microsoft Cosmos DB, the features available in it, and options to integrate it with applications.

Introduction:

Cosmos DB is the next generation of Azure DB; it is an enhanced version of DocumentDB.
DocumentDB customers, with their data, automatically became Azure Cosmos DB customers.
The transition is seamless and they now have access to all the capabilities offered by Azure Cosmos DB.

Cosmos DB is a planet-scale database. It is a good choice for any serverless application that needs low, order-of-millisecond response times and needs to scale rapidly and globally. It is transparent to your application and the configuration does not need to change.

How It was derived:

Microsoft Cosmos DB isn't entirely new: it grew out of a Microsoft development initiative called Project Florence that began in 2010, a speculative look at a future where our natural and digital worlds co-exist in harmony through enhanced communication.

Picture1

  • It was first commercialized in 2015 with the release of a NoSQL database called Azure DocumentDB.
  • Cosmos DB was introduced in 2017.
  • Cosmos DB expands on DocumentDB by adding multi-model support, global distribution capabilities and relational-like guarantees for latency, throughput, consistency and availability.

Why Cosmos DB?

  • It is schema-free: it indexes all the data without requiring you to deal with schema and index management.
  • It's also multi-model, natively supporting document, key-value, graph, and column-family data models.
  • It is an industry-first globally distributed, horizontally scalable, multi-model database service. Azure Cosmos DB guarantees single-digit-millisecond latencies at the 99th percentile anywhere in the world, offers multiple well-defined consistency models to fine-tune performance, and guarantees high availability.
  • No need to worry about instances, servers, CPU or memory. Just select the throughput and required storage and create collections. Cosmos DB works based only on throughput. It has integrations with Azure Functions for serverless, event-driven solutions.
  • APIs and access methods – DocumentDB (SQL) API, Graph API (Gremlin), MongoDB API, RESTful HTTP API and Table API. This gives more flexibility to the developer.
  • It is elastic, globally scalable and highly available, and it automatically indexes all your data.
  • 5 consistency levels – Strong, Bounded Staleness, Session, Consistent Prefix and Eventual. The application owner now has more options to choose between consistency and performance.

Summary on Cosmos DB:

Picture2

Example without Cosmos DB:

  • Data geo-replication might be a challenge for the developer.
  • Users from remote locations might experience latency and inconsistency in their data.
  • Providing automatic failover is a real challenge.

Picture3.png

Example with Cosmos DB:

  • Data can be geo-distributed in a few clicks.
  • Developers do not need to worry about data replication.
  • Strong consistency can be given to end users across geo-distributed locations.
  • The web-tier application can be pointed at the primary or secondary at any time in a few clicks.
  • Failover can be initiated manually at any time, and automatic failover is available.

Picture5

Data Replication Methods:

  • Replicate Data with a single click – we can add/remove them by a single click.
  • Failover can be customized any time in few clicks(automatic/manual).
  • Application does not need to change.
  • Easily move web tier and it will automatically find the nearest DB.
  • Write/Read Regions can be modified any time.
  • New Regions can be added/removed any time.
  • Can be accessed with different API’s.

Existing data can be migrated:

  • For example, if we already have a Mongo app we can just import the data and move it over.
  • Just copy the Mongo data into Cosmos and replace the URL in the code.
  • We can use the Data Migration Tool for the migration.

5 Consistency Types:

There are 5 consistency levels, and the developer can choose according to the requirement.

  • Eventual – end users get the best performance, but the data will not necessarily be consistent across regions.
  • Strong – a write is only committed once the copy to the read regions is successful (consistent data across all regions).
  • Bounded Staleness – an option to bound the staleness, for example to 2 hours; if it is set to 0 it effectively becomes strong consistency (we can select the interval up to which reads may lag until replication to the read regions completes).
  • Session – not consistent for all users, but the client that commits the data always sees its own fresh data.
  • Consistent Prefix – the order of writes is preserved, so readers see a uniform, in-order view of the data.

Based on these 5 consistency concepts, the application developer can  decide to choose either to give the best performance or a consistent data to the end users.

Example of Eventual Replication:

The data is not yet consistent in the read region; only users in the write region can see the fresh data.

Picture6

Replicate Data with a single click:

Cosmos DB provides more regions to replicate to, in just a few clicks, than Amazon and Google combined.

Picture7.png

Available API Methods:

Picture8

Recommendations from Microsoft:

  • According to Microsoft, Cosmos DB can be used for "any web, mobile, gaming and IoT applications that need to handle massive amounts of reads and writes on a global scale with low response times." However, Cosmos DB's best use cases might be those that leverage event-driven Azure Functions, which enable application code to be executed in a serverless environment.
  • It is not a relational database and not a SQL Server; it is not good at arbitrary joins. The shape of the data does not matter much as long as you don't do joins.
  • The minimum is 400 RU per collection, which is around 25 USD per month. Collections are charged individually, even if they contain small amounts of data, so you may need to change your code to put all documents into one collection.
  • It's a "NoSQL" platform with a SQL layer on top for query operations, so it is better not to do multiple joins.

Thanks & Regards
Sathish Veerapandian
MVP – Office servers & services


Create Cosmos DB , failover options,data replication options from azure subscription


This article outlines the steps to create Cosmos DB from the azure subscription.

  1. Log in to the Azure portal – click on Azure Cosmos DB – Create Cosmos DB.
  2. Type the document ID – keep in mind this ID forms the URL we will be using as the connection string in the application.
  3. Select the preferred API according to your requirement.
  4. Choose the Azure subscription and select the resource group.
  5. Choose the primary location where the data needs to be replicated. There is an option to enable geo-redundancy, which can also be done later.

Picture9
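The same account can also be created from PowerShell; a hedged sketch using the newer Az.CosmosDB module (which post-dates this article), with placeholder names:

# Create a Cosmos DB (SQL API) account with an additional read region and automatic failover
New-AzCosmosDBAccount -ResourceGroupName "my-rg" -Name "mycosmosaccount" `
    -ApiKind "Sql" -Location @("West Europe","North Europe") `
    -DefaultConsistencyLevel "Session" -EnableAutomaticFailover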

To Enable Geo-Redundancy-

Click on – Enable Geo Redundancy – and choose the  preferred region.

Picture10

Replicate data globally in few clicks –

Picture11

Failover options –

There are 2 failover options Manual and automatic.

Picture12
Manual can be triggered any time – we just need to select the disclaimer and initiate failover.

Picture13

Add new regions any time and replicate your data in few minutes-

Picture14

Failover options – Automatic

We need to go and enable the automatic failover as below.

Picture15

There is also an option to change the failover priorities in a few clicks. The good part is that this can be done at any time and we do not need to change anything in the code.

Picture16

Consistency levels:

They can be modified at any time. The default consistency level is Session, as below.

Picture17
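For completeness, a hedged sketch of changing the default consistency level from PowerShell with the Az.CosmosDB module (newer than this article; the resource group and account names are placeholders):

# Switch the account's default consistency level, for example to Strong
Update-AzCosmosDBAccount -ResourceGroupName "my-rg" -Name "mycosmosaccount" -DefaultConsistencyLevel "Strong"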

Network Security for the Database:

We have an option to allow access to the database only from a few subnets. This adds security to the documents. A VAPT can be initiated after setting up this restriction, which eases the database admin's job with regard to data security considerations.

Picture18

Endpoint and Keys to integrate with your code:

We need to use the URI and the primary key to integrate with the code. This can be seen by clicking on the keys section on the left side.

Picture19.png

Summary:

Now a Cosmos database account is created – next create a new collection and then create documents, which are stored as JSON rows. Try to keep most of the documents under one collection, because the pricing model is per collection.

Create collection:

Click on Add collection

Picture21.png

Create new Database ID – then Collection ID

Remember the collections in Cosmos DB are created in the below order.

Picture23

Now we need to choose the throughput and the storage capacity. They will be charged according to the selection. Also there is an option to choose unique keys which adds more data integrity.

Picture22

Example of a new document

Picture25

Better to define a document ID and collection ID.

Picture24

Once the above is done, we can connect to the document store via the preferred available API, and the developer does not need to worry about data schema, indexing or security.

More sample codes in GitHub:

https://github.com/Azure-Samples/azure-cosmos-db-documentdb-nodejs-getting-started

Example below:

Before you can run this sample, you must have the following prerequisites:

◦An active Azure Cosmos DB account.
◦Node.js version v0.10.29 or higher.
◦Git.

1) Clone the repository.

Picture26
2) Change Directories.
3) Substitute Endpoint with your primary key and endpoint.

Picture27
4) Run npm install in a terminal to install the required npm modules.
5) Run node app.js in a terminal to start your Node application.

Thanks & Regards
Sathish Veerapandian
MVP – Office servers & Services.

Steps to renew the SSL Service Communication certificate in ADFS server


This article explains the types of certificates present on an ADFS server and the steps to renew the SSL service communication certificate on the ADFS server.

Basically there are 3 types of certificates required for ADFS:

  1. Service communication certificate – This certificate is used for secure communications with clients (web clients, federation servers, web application proxies and federation server proxies). The service communication certificate is presented to end users when they are redirected to the ADFS page by the application. It is always recommended to use a public SSL certificate for the service communication certificate, because it is presented to end users during that redirect.
  2. Signing certificate – Signing certificates are used to sign the SAML token. When the token is signed, all the data within it is still readable in clear text, but when the consumer receives the token it knows that the token has not been tampered with in transit; if it finds the token has been tampered with, it will not accept it. Token signing can only be done with the private portion of the key, which only the ADFS server holds.
    This certificate is used to sign only the SAML tokens. Token validation is done with the public portion of the certificate, which is available in the ADFS metadata. ADFS comes with a default self-signed signing certificate that has a validity of 1 year, and this can be extended; alternatively we can generate one from an internal CA and assign it.
  3. Token decryption certificate – This certificate is used when the application sends encrypted tokens to the ADFS server. It does not sign the token, it only encrypts it. The application encrypts the token using the public part of the token decryption certificate. Only the ADFS server holds the private part of the key, which it uses to decrypt the token. ADFS comes with a default self-signed token decryption certificate that has a validity of 1 year, and this can be extended; alternatively we can generate one from an internal CA and assign it.

We can see the public certificate from the published ADFS  metadata.

Access the metadata URL in a browser and look for X509 elements whose values end with an "=" sign; the value is base64 encoded, so it will normally end with "=".

Testtr

We can see multiple X509 values. The public certificate is base64 encoded, so it will normally end with an "=" sign, as in the example below.

Testtr1

 

Once we save it in .crt format we can see the public certificate that is present in the ADFS metadata URL. Using this, the application will encrypt the token and send it to the ADFS server. The ADFS server in turn can decrypt it using the certificate's private key, which is present only on the ADFS server. If this private key is compromised, anybody can impersonate your ADFS server.

Testtr1

We can more or less verify the encryption on our own to get a better understanding of how it works.
When we do a SAML trace in Firefox Developer Edition against a relying party we have configured with ADFS and check the SAML token, we will see that the SAML response sent to the integrated service provider is encrypted.

The below steps can be followed to renew the service communication certificate:

  1. Generate CSR from ADFS server. This can be done via IIS.
  2. Get the certificate issued from the public CA Portal.
  3. Once certificate is issued, add new certificate in Certificate store.
  4. Verify Private Key on the certificate. Make sure new certificate has the private key.
  5. Assign Permissions to the Private Key for ADFS service account. Right click on the certificate, click manage private keys, add ADFS service account and assign permissions as shown in below screenshot.

Untitled

6. From the ADFS console select "Set Service Communication Certificate".

7. Select the new certificate from the prompted list of certificates.
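On Windows Server 2012 R2 and later the same change can also be made from PowerShell; a hedged sketch using the built-in ADFS cmdlets (the thumbprint is a placeholder, and note that on 2012 R2+ the SSL binding is set separately from the service communication certificate):

# Bind the new certificate for HTTPS (ADFS on Server 2012 R2 and later)
Set-AdfsSslCertificate -Thumbprint "NEWCERTTHUMBPRINT"

# Set it as the service communication certificate
Set-AdfsCertificate -CertificateType "Service-Communications" -Thumbprint "NEWCERTTHUMBPRINT"

# Restart the ADFS service so the change takes effect
Restart-Service adfssrv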

To renew the SSL certificate for ADFS claims providers federation metadata URL can follow the previous article – https://exchangequery.com/2018/01/25/renew-ssl-certificate-for-adfs-url/

 

Configure Enterprise Vault Server Driven PST migration


This article outlines the steps to perform a bulk import of PST files into the archives of a large number of mailboxes in Enterprise Vault.

There are a few methods to perform a server-driven migration in Enterprise Vault, and here we will cover one option using the PST task controller services.

Prerequisites:

A CSV file with the below information needs to be prepared for feeding the data into Enterprise Vault Personal Store Management.

Untitled

Untitled

Where –

UNCPath – path of the PST files. It is better to keep them on the Enterprise Vault server, which will speed up the migration.
Mailbox – display name of the mailbox associated with this EV archive.
Archive – display name of this archive.
Archive Type – Exchange Mailbox, since it is associated with an Exchange mailbox.
Retention Category – can be chosen based on requirement.
Priority – can be chosen based on requirement.
Language – can be chosen based on requirement.
Directory Server – choose the corresponding directory server.
Site Name – choose the corresponding site name.
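A hedged sketch of generating such a CSV with PowerShell: the column headers mirror the fields listed above but should be checked against what your EV Personal Store Management import actually expects, and the share path, retention category and server names are placeholders.

# Build one CSV row per PST file found in the share (placeholder values throughout)
$rows = Get-ChildItem "\\fileshare\Archive" -Filter *.pst | ForEach-Object {
    [PSCustomObject]@{
        UNCPath           = $_.FullName
        Mailbox           = $_.BaseName          # assumes the PST is named after the mailbox display name
        Archive           = $_.BaseName
        ArchiveType       = "Exchange Mailbox"
        RetentionCategory = "Default Retention Category"
        Priority          = "Medium"
        Language          = "English"
        DirectoryServer   = "EVDIRSERVER"
        SiteName          = "EVSITE"
    }
}
$rows | Export-Csv "C:\Temp\PSTImport.csv" -NoTypeInformation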

Once the csv file is ready , we need to import the data via personal store management, by choosing multiple and feeding the CSV file.

Untitled

Untitled

Once imported, we can see a summary of the successfully imported CSV entries.

If EV is unable to find an associated archive for any entry in the CSV file, it will raise an error message only for those entries, and we have an option to export them as a CSV file.

Untitled

After this import is successful we can see the list of successfully imported files with below information.

Untitled

Now we have provided EV with the required data to migrate into the associated archives. Next we need to create the PST Collector task, the PST Migrator task and the PST migration policy.

After this we need to create a PST holding folder by right-clicking on the EV site properties and specifying the location. This PST holding folder is a temporary location used by EV to copy the actual PST files from the UNC path and perform the import.

This is done because if EV tries to import a PST directly and the import fails, that PST can no longer be used. After the migration is complete EV will automatically delete these files based on the PST migration policy that we have configured.

Untitled

After this, configure the PST migration policy.

We need to ignore the client-driven settings here, because we are performing a server-driven migration by providing the PST files via the CSV file.

Untitled

There is an option to set the post-migration handling of PST files. It is better not to use this option until the complete migration task is over and we get confirmation from the end users.

Untitled

There is a very good option to send email notification post migration.

Untitled

After this we need to create PST collector Task

untitled13

This setting is very important: it specifies the maximum number of PSTs to be collected in the holding area. We can set this value based on our requirement.

Untitled

We should set the collector task schedule to run outside office hours.

Untitled

Configure the Migrator task:

Once this is done we need to configure the PST Migrator task.

untitled16

We need to configure a temporary file location for the PST files to start the migration.

Untitled

We also have the option to set the number of PSTs to migrate concurrently, which we can increase based on our requirement. After the CSV is imported we can run the PST Collector and Migrator tasks, which will start importing the PSTs into the associated EV archives.

There is also a file dashboard which will always help us check the current migration status.

Untitled

 

Very important – select the override password option for password-protected PST files in Personal Store Management. This will also migrate the password-protected PST files; a very handy option.

untitled17

Tips :

  1. Make sure the EV service account is used to run the Collector and Migrator tasks.
  2. Make sure the EV service account has full access to the PST holding, collecting and migrating shared drives. If this is not in place, the import, collection and migration will fail.
  3. It is better not to perform any failover of the node while a large import operation is in progress.
  4. PST Collector and PST Migrator logs are generated whenever these tasks run and are located in the EV provisioning task location. They give more information when there are any issues or roadblocks in the migration.
  5. If any of the provided PST files are password protected, they will not be migrated unless we select the override password-protected files option in Personal Store Management.
  6. Make sure you have sufficient free disk space in the PST Collector and PST Migrator locations.

Thanks & Regards
Sathish Veerapandian

Enable Azure DDOS Protection and its features


In Azure we can enable DDoS protection in a few clicks for our applications running and deployed in Azure virtual networks.

Using this we can protect the resources in a virtual network and its published endpoints, including public IP addresses. When it is integrated with the Application Gateway web application firewall, DDoS Protection Standard can provide full layer 3 to layer 7 protection.

There are 2 service tiers:

Basic-

Basic protection is enabled by default. It provides protection against common network-layer attacks through always-on traffic monitoring and real-time mitigation.

Basic.png

Standard-

Standard protection is a paid premium service. It provides dedicated monitoring and machine learning, and configures DDoS protection policies for the virtual network. When it is enabled, the application's traffic patterns are learned, which allows malicious traffic to be detected in a smart way. We can switch between these options for our virtual networks in a few clicks.

DDOS9

And then we can click on the standard plan.

DDOS10

This also provides attack telemetry views through Azure Monitor, enabling alerting when your application is under attack. Integrated layer 7 application protection can be provided by the Application Gateway WAF.

The Standard tier is integrated with virtual networks and will protect Azure application service endpoints from DDoS attacks. It also has the alerting and telemetry features which are not present in the Basic DDoS protection plan, which comes free of cost.

First we need to create a DDoS protection plan if we want to use the Standard tier.

Navigate to the Azure portal – click on Create DDoS protection plan.

DDOS2

Type Name – Choose Subscription – Select resource Group and choose the location.

DDOS3
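The same plan can be created from PowerShell; a hedged sketch using the Az.Network module (resource names and the VNet association are placeholders):

# Create the DDoS protection plan
$plan = New-AzDdosProtectionPlan -ResourceGroupName "my-rg" -Name "my-ddos-plan" -Location "West Europe"

# Associate it with an existing virtual network and enable DDoS protection on that VNet
$vnet = Get-AzVirtualNetwork -ResourceGroupName "my-rg" -Name "my-vnet"
$vnet.DdosProtectionPlan = New-Object Microsoft.Azure.Commands.Network.Models.PSResourceId
$vnet.DdosProtectionPlan.Id = $plan.Id
$vnet.EnableDdosProtection = $true
$vnet | Set-AzVirtualNetwork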

Once it is done the deployment will be successful

DDOS5

We have an automation option during this deployment.

DDOS18

After it is deployed, when we go to the DDoS resource we can see the below options in it.

Activity Log – 

This is more like an audit log which records modifications to the resources in the subscription.
There are also a few options which tell us about the status of the operation and other properties, but this log will not include any GET operations happening on the resources.

There is an option to filter per resource, resource type and operation.

DDOS19

We have an option to filter them by category, severity and who initiated them.

DDOS20

Access Control(IAM)-

We can view who has access to the resource, add new access to the resource and also remove access.
DDOS21

Tags- 

This approach is helpful when we need to organize our resources for billing or management. Tags can be applied to resource groups or directly to resources.
A tag query retrieves all the resources in our subscription with that tag name and value, which is usually helpful for billing tracking.

Tags1

Tags support only resources deployed through Resource Manager; they do not support resources deployed through the classic model.

By default a resource group will not have tags assigned to it. We can assign tags by running the below command.

Tags
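For reference, a hedged sketch of tagging a resource group with the Az PowerShell module (the group name and tag values are placeholders):

# Assign tags to an existing resource group
Set-AzResourceGroup -Name "my-rg" -Tag @{ Department = "Finance"; Environment = "Production" }

# List all resources carrying a given tag
Get-AzResource -TagName "Department" -TagValue "Finance" | Select-Object Name, ResourceType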

Locks – 

Management locks help us prevent accidental deletion or modification of our Azure resources. We can manage these locks from within the Azure portal.

locks

As administrators, we might need to lock a subscription, resource group, or resource to prevent other users in the organization from accidentally deleting or modifying critical resources.

There are 2 types of lock levels-

Delete (CanNotDelete) –
Authorized users are able to read and modify a resource, but they cannot delete any resources.

ReadOnly –
Users can only read; they cannot modify or delete any resources.

locks1

Metrics – 

Allows us to monitor the health, performance, availability and usage of our services.

metrics

Thanks & Regards
Sathish Veerapandian

Storage Explorer in Azure portal and its options


The Storage Explorer desktop tool is now also available in the storage accounts section of the Azure portal.

blob1

 

From here we have options to manage and create blob containers, file shares and queues.

New blob containers can be created, deleted and managed –

 

blob6

Further we can upload and delete blobs

blob9

We can further drill down and manage properties.

10

These are the options available in the properties.

11

In the same way, file shares can be created, deleted and managed.

We also have options to upload files, connect to a VM and download files from here.

blob7

The storage queues can also be created and managed.

There are options to add a message, dequeue a message and clear the queue.

blob8

Below is a small summary of Azure storage account blobs, file shares, and queues.

What is Azure Blob Storage?

Azure Blob storage is Microsoft's object storage solution.
This storage type is optimized for storing large amounts of unstructured data, such as text or binary data.
The items stored in Blob storage can be accessed from anywhere in the world via HTTP/HTTPS. It can be managed through Azure tools (CLI, PowerShell, etc.), and client libraries are available for multiple languages.

Once created, the account has a service endpoint like below. This is used in the connection string that our APIs use to access the data in the Azure storage account.

blob91.png

There are 3 types of blobs –

Block blobs – can be used to store text and binary data, up to about 4.7 TB per blob. The data is stored as blocks that can be managed individually.

Append blobs – similar to block blobs, except they are optimized for append operations. This makes them best suited for recurring tasks such as logging data from virtual machines.

Page blobs – the data is stored in pages and accessed randomly, and a page blob can be up to 8 TB in size.

So the blobs are stored in the below hierarchy:

Storage Account – Containers – Blobs

A storage account can hold multiple containers, and a container in turn can hold an unlimited number of blobs.
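A minimal sketch of creating a container and uploading a blob with the Az.Storage PowerShell module (the account name, key and file paths are placeholders):

# Build a storage context from the account name and key
$ctx = New-AzStorageContext -StorageAccountName "mystorageacct" -StorageAccountKey "<account key>"

# Create a container and upload a local file as a block blob
New-AzStorageContainer -Name "documents" -Context $ctx
Set-AzStorageBlobContent -File "C:\Temp\report.txt" -Container "documents" -Blob "report.txt" -Context $ctx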

What is Azure File Storage?
This is a service from Azure through which we can create a file share in the Azure cloud using the standard Server Message Block (SMB) protocol. This option is really useful for migrating local file shares to Azure quickly and with very minimal cost.

Once the file share is created we will have a connection string like below.

We can use it to connect from either Windows or Linux.

blob92.png

The connection string will have the username and password also.

blob93

Since it uses SMB it needs port 445, so make sure port 445 is open in your local network firewall. We will not be able to connect if port 445 is not allowed from the local network.
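A hedged sketch of mounting the share from Windows PowerShell, assuming the storage account name and key shown in the connect dialog (the share and drive names are placeholders):

# Credential: the user is AZURE\<storage account name>, the password is the storage account key
$secKey = ConvertTo-SecureString "<account key>" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ("AZURE\mystorageacct", $secKey)

# Map the Azure file share to drive Z:
New-PSDrive -Name Z -PSProvider FileSystem -Root "\\mystorageacct.file.core.windows.net\myshare" -Credential $cred -Persist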

What is the Azure Storage Queue service?

This is a service offered by Azure where we can store large volumes of messages that can be accessed from anywhere in the world via HTTP/HTTPS. A single message can be up to 64 KB in size. Using this we can provide persistent messaging within and between services, and we can store a very large number of messages even in the same queue.

Once created we will get an endpoint like below, against which REST-based GET/PUT/PEEK operations can be initiated.

blob94

 

 

 

Error – loading Microsoft Teams Modern authentication failed here status code caa20004


After enabling Microsoft Teams in a federated setup with ADFS, we might get this error when on-premises users try to log in to Microsoft Teams for the first time.

WhatsApp Image 2018-05-30 at 21.05.12

In the client logs at the below location we can also see the below messages –

C:\Users\username\AppData\Roaming\Microsoft\Teams

Wed May 30 2018 06:51:54 GMT+0400 (Arabian Standard Time) <7092> — warning — SSO: ssoerr – (status) Unable to get errCode. Err:Error: ADAL error: 0xCAA10001SSO: ssoerr – (status) Unable to get errorDesc. Err:Error: ADAL error: 0xCAA10001

Wed May 30 2018 06:51:54 GMT+0400 (Arabian Standard Time) <7092> — event — Microsoft_ADAL_api_id: 13, Microsoft_ADAL_correlationId: 2c46e41d-ef75-49ed-b277-cfd61427b273, Microsoft_ADAL_response_rtime: 2, Microsoft_ADAL_api_error_code: caa10001,

There is also a Get logs option that can be opened from the Teams icon when this issue occurs, as shown below –

Untitled

When the issue occurs, we can see the error message about being unable to get the ADAL access token in the collected logs.

Untitled2

In the below example, since it is a successful login, it shows as success after getting the access token.

Untitled3

There is also an option to download the MS Teams diagnostics logs by using the below key combination, which gives us the MS Teams diagnostics logs:

Ctrl + Shift + Alt + 1

12

 

Looking through these diagnostics logs, they contain a lot of information such as client version, computer name, memory and user ID; we can search only for the information related to the issue we are currently facing, since reading the full logs would be really difficult.

Untitled4

Below is an example of getting successful access token.

Untitled5

 

Any Azure AD dependent app like Microsoft Teams has an optimized path for the first-time login process: logging in against the WS-Trust Kerberos authentication endpoints of ADFS. If this first attempt is not successful, the client will fall back to an interactive login session presented as a web browser dialog.

But the new Office and ADAL clients will first try only the WS-Trust 1.3 version of the endpoint for Windows integrated authentication, which is not enabled by default.

Solution:

Enable WS-Trust 1.3 for desktop client SSO on the on-premises ADFS server that is federated with the Azure AD tenant, by running the below command.

Enable-AdfsEndpoint -TargetAddressPath "/adfs/services/trust/13/windowstransport"

We also want to ensure that we have both Forms and Windows Authentication (WIA) enabled in our global authentication policies.
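A minimal sketch of checking and, if required, adjusting the global authentication providers with the built-in ADFS cmdlets (run in an elevated session on the ADFS server):

# Review the current primary authentication providers
Get-AdfsGlobalAuthenticationPolicy | Select-Object PrimaryIntranetAuthenticationProvider, PrimaryExtranetAuthenticationProvider

# Ensure both Forms and Windows Integrated Authentication are enabled for the intranet
Set-AdfsGlobalAuthenticationPolicy -PrimaryIntranetAuthenticationProvider @("FormsAuthentication","WindowsAuthentication")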

Untitled5

Email Security – Enable Sand Boxing ATP on Cisco Iron Port


Cisco Advanced Malware Protection (AMP) uses Cisco threat intelligence, an extensive knowledge base of the latest threats and security trends, along with analytics and behavioral indicators, which help us defend against the latest spear phishing and malware attacks.

This basically falls under the advanced threat capability category, which provides an additional layer of security. These ATP features include retrospective detection alerts, which can track malware that made it through the initial defenses.

AMP is the name most security systems now give to this advanced threat detection, and it includes the following:

  1. A separate, private, isolated environment with implementations for multiple attack vectors/entry points (firewall, network, endpoint, email).
  2. Ransomware/Malware Threat prevention.
  3. Retrospective alerting and remediation techniques.

Usually AMP works in the following fashion for any email security system:

Preventive measure – strengthens the defense mechanism by staying up to date on the latest malware attacks and countermeasures from the respective real-time threat intelligence service.
IronPort uses the Talos engine – https://www.talosintelligence.com/
Using this technique, known malicious content is blocked.

Threat analysis of emails in transit – during this process the file is detonated as if on an end-user PC (Windows/Mac) in an isolated network to detect malware, observe the file behavior and assign a threat level if anything is detected. If sandboxing is not enabled on premises, it captures the fingerprint of each file that hits the gateway and sends it to the AMP cloud-based intelligence network. In most gateways we have an option to select which file types need to be analyzed via AMP.

Tracking after delivery – in this step it uses continuous analysis, which helps identify any malicious file capable of launching a malware attack after a certain period of time. Using this, AMP is able to find the infected source, alert the admin and provide visibility down to the infected file.

In this article we will look at how to enable AMP in Cisco IronPort.

Log in to the appliance – navigate to Security Services – Advanced Malware Protection – select File Reputation and Analysis.

ip1

If it is enabled we will get the below screen. To further fine-tune the settings click on Edit Global Settings.

ip2

Click on – Enable file reputation.

ip3

This is used to protect against zero-day and targeted file-based threats.

The following actions are performed after a file's reputation is evaluated:
• If the file is known to the file reputation service and is determined to be clean, the file is released to the end user.
• If the file reputation service returns a verdict of malicious, then the appliance applies the action that we have specified for such files.

Next we have Enable File Analysis –

This needs to be enabled. It is available for almost all the attachment types.

ip4

ip5

ip6

File Analysis works in coordination with file reputation filtering. When this option is enabled, attachments in emails are sent for file analysis. Here we have the option to choose the file types on which we want to perform the analysis. Be selective in this section: keep in mind that because analysis is enabled for these files, mail delivery will take a few extra minutes compared to a user who does not have AMP enabled for their account.

If the file is sent for analysis to sandboxing (cloud or on premises, based on the setup):
• If the selected file type is sent to the cloud for analysis, files are sent over HTTPS.
• The appliance also generates an identifier for each file using a Secure Hash Algorithm (SHA-256).
• Analysis normally takes minutes, but may take longer based on the size and file type.
• Results for files analyzed using an on-premises Cisco AMP Threat Grid appliance are cached locally.

Advanced settings for file reputation – here we need to select our sandboxing environment based on our configuration. If we are using cloud AMP then we have 4 regions to select from based on our requirement.

ip7

There is an option to register the appliance with AMP for Endpoints. Make sure you have a user account in the AMP for Endpoints console with admin access rights. For more details on how to create an AMP for Endpoints console user account, contact Cisco TAC.

ip71

If we have a local on-premises AMP setup then we need to select the private reputation cloud option and add the required details.

ip8

We have the same cloud or on-premises option for file analysis.

If specifying the Cisco cloud server, choose the server that is physically nearest to your appliance. Newly available servers will be added to this list periodically using standard update processes.

ip9

If we choose our own private cloud then we need to use the self-signed certificate or upload a certificate. This is required for encrypted communications between this appliance and your private cloud appliance, and it must be the same certificate used by the private cloud server. I prefer to have one SHA-256, 2048-bit certificate generated from an internal CA and applied on both the private cloud and the appliance for this connection alone.

Untitled

This setting is optional; we can leave it as it is, or configure the cache expiry period for file reputation disposition values if required.

ip10

Once enabled, the file types selected in AMP will be passed to it after the antivirus engine.

We can see the files blocked in the AMP in the incoming mail dashboard.

Untitled1

Imp Notes:

  1. An AMP subscription is required to enable this functionality.
  2. Advanced Malware Protection services require network communication to the cloud servers on port 443 (for File Reputation) and port 443 (for File Analysis). If there is no communication, the file types enabled for AMP will be sent to the quarantine folder even if they are clean. The below error message will be seen in the incoming email header if no communication to the cloud server is present.

Untitled

Thanks & Regards
Sathish Veerapandian


Microsoft Teams- Consult before transferring a call & HoloLens Remote assist


Calling in Teams is powered by Phone System (formerly known as Cloud PBX), the same service in Office 365 that enables PSTN calling capabilities in Skype for Business Online.

The Phone System feature set for Skype for Business is different from the Phone System feature set for Teams. Also, with Direct Routing we can use our existing PSTN telephony system through an SBC. To connect the on-premises SBC to Microsoft Teams, a SIP proxy at sip.pstnhub.microsoft.com is used.

Microsoft Teams has a new feature: consult before transfer.

By using this option we can help callers who have reached our extension by mistake to get to the right person.

This feature lets you quickly check in with another person via chat or audio call before transferring a call to them.

Anyone with an Enterprise Voice license can do this, not just delegates! To try it, when you're in a call, click More options (…) > Consult then transfer.

CBT

Call someone on a HoloLens –

Microsoft introduced the remote assist option for HoloLens users via Microsoft Teams.

Untitled

By using this option we can collaborate remotely with the people in our Microsoft Teams colleagues list. In remote assistance they can make mixed-reality annotations; we can show them what we see, place arrows, draw lines and share images with our colleagues.

Prerequisites:

  1. This works from the Teams desktop app on a Windows 10 PC.
  2. The Remote Assist app needs to be installed on the HoloLens.

 

Enable DLP for outgoing emails in Cisco Iron Port


Data Loss Prevention (DLP) protects an organization's sensitive, proprietary information by detecting exfiltration attempts before the data leaves in transit, and it continuously monitors traffic to protect against all types of data loss. Organizational data leaks mostly happen when end users unintentionally email sensitive data out of our network, which leads to data leak incidents.
There are many ways to achieve this, and in this article we will look into how to prevent data loss with the options present in the Cisco IronPort email gateway solution.

Basically, any DLP involves two actions:

Data Match: The DLP application scans the email body, headers and attachments for sensitive content defined by the DLP policy rules.

Action: Once an email is identified as sensitive, the action taken depends on the DLP policy that blocked it; the action type can be drop, quarantine, or deliver with a disclaimer, and an admin, manager or recipient can be notified based on the policy and document classification.

Below are the steps to enable DLP on Cisco IronPort.

Log in to Cisco IronPort – select Security Services – click on Data Loss Prevention.

DLP

By default this option will be enabled, but now we need to create DLP policies and action types based on our requirements.

It is better to enable content logging, which will appear in message tracking and helps with troubleshooting.

DLP1

In this example we will run through the DLP wizard, which includes a few popular, commonly used policies. Adding custom policies is very much possible in Cisco IronPort, and there are more options to customize them.

An example of enabling matched content logging when DLP is enabled. This helps admins debug and find the reason why an email was blocked.

DLP2

There are more commonly used cases; in our example we choose PCI-DSS, which covers highly sensitive data and should be enabled especially for finance teams.

DLP3

Here we have an option to enable the DLP reports.

DLP4

Once done, the outgoing mail policies will be configured with the PCI-DSS policy we created.

DLP6

In this policy we can edit and choose the inbuilt DLP dictionaries based on our requirements.

DLP7

There is an option to add custom dictionaries as well.

DLP8

In Mail Policies there is an option to apply the policy only to a few users, either as senders or as recipients.

DLP9

An option to add attachment types is also present.

DLP10

The severity settings can be altered below.

DLP12

The severity scale can be adjusted based on the policy and our requirements.

DLP13

A custom classifier can be added.

DLP14

In the classifier we have an option to choose templates from dictionaries and entities.

DLP15

DLP16

Once done, DLP will be applied to outgoing emails based on the configured policies and actions.

Imp Notes:

    1. Before implementing DLP in any environment, a lot of study is required across multiple phases, working closely with the security team and implementing purely based on the document classification.
    2. We need to understand how sensitive data is currently handled by all teams and identify the current risks. After this analysis, the required action plan for creating policies and actions must be drawn up.
    3. End-user awareness sessions are very important when dealing with DLP. Advising finance teams to use more secure channels, such as enterprise file share or DRMS solutions, when dealing with sensitive documents is recommended.
    4. Any DLP policy we create should audit and notify the manager, which creates awareness among employees and makes tracking easier.

Thanks & Regards
Sathish Veerapandian

Product Review – Stellar Mailbox Extractor for Exchange Server

$
0
0

Stellar Mailbox Extractor for Exchange Server – Product Review

Exchange administrators face a wide range of nightmarish scenarios throughout their careers. Handling corrupted databases, restoring files from backup, and extracting data from a former employee's computer are some examples of situations that every Exchange administrator wants to avoid because they are complex and time-consuming. Unfortunately, they end up facing these scenarios more often than they would like.

But the good news is there are tools like Stellar Mailbox Extractor for Exchange Server that can make your job a lot easier.

What is Stellar Mailbox Extractor for Exchange Server?

This is a handy tool to have in your arsenal as it is designed to extract data from clean EDB files and to connect directly to the Exchange environment. It can also be used to mass export data from an existing environment to other formats like PST.

Features

Let’s look at some of its prominent features to get an idea of what it can do for Exchange administrators like us.

  • Converts a mailbox from EDB format to other formats such as PST, MSG, EML, HTML, RTF and PDF
  • You can convert multiple mailboxes at once
  • Gives you the option to search for particular content in your mailbox. The filters are advanced and offer a ton of flexibility.
  • Converts archive mailboxes to PST
  • Compatible with many versions of Exchange Server.

These features have been tremendously helpful for many Exchange administrators.

Ideal situations

This tool is handy because it saves time and effort in many common situations and problems. Here are a few where this tool would prove to be invaluable for you.

Extract Mailboxes

This tool is perfect for extracting mailboxes from an EDB file to PST. As an administrator, this extraction task has been an integral part of my working life, and Stellar Mailbox Extractor for Exchange Server saves a lot of time for me. It also takes away the mundane side of the job.

One aspect I truly love is its user interface which is almost identical to the Mailbox Extractor tool. So, there is nothing much to learn or experiment here; everything is fairly straightforward.

Mass Exports

Another ideal situation for this tool is when you want to do mass exports from the existing environment to PST and other formats. Our organization often uses this tool for migration, where we export mailboxes from one Exchange environment to another.

Though these are some of the prominent uses of Stellar Mailbox Extractor for Exchange Server, you can end up using it in many other situations as well.

Installation and Use

A salient feature of this tool is its easy interface.

Extracting content from mailboxes and exporting it to other formats is extremely complicated. But this tool masks the complexity behind a simple and intuitive user interface. As a result, you simply look at good-looking screens, oblivious to what's happening in the background.

This way, you are not only spared the complex processes, but the interface also makes the tool highly usable for anyone. You don't have to be an Exchange administrator with many years of experience and in-depth knowledge to use it. Even novices can use this tool comfortably.

With all that said, let us briefly see how we can use this tool.

When you double-click on the exe file, the installation wizard starts the process. There is really nothing much for you to do, as the wizard takes care of everything.

After installation, when you open the tool, you’re given two choices to start off. You can either open an offline EDB file or connect to Online Exchange.

iu

You can even view the folder structure of each mailbox and the contents within each folder on the left hand side pane. As you expand the tree structure, you can navigate your way.

You can right-click the folders at any time and you’ll be given a set of formats to which you can convert.

iu

These formats give you a ton of flexibility to view and migrate your data at any time.

Another cool aspect about this tool is that you can view individual mails, contacts, notes, attachments and pretty much everything else stored in your EDB files.

There are even search criteria that help you zero in on the messages you want to see.

iu

The search feature is advanced and helps you to quickly find what you want. The available fields include:

  • To
  • From
  • CC
  • Subject
  • Text in the body of the email
  • Attachment name or file extension
  • Date range

Once you find the content you want, you can convert it into any of the recognized formats. Simply choose your content and click 'Save', or right-click the tree item, and pick any of the following formats.

  • PST
  • MSG
  • EML
  • RTF
  • HTML
  • PDF
  • Office 365

Alternatively, you can export the contents directly to a live Exchange server. In fact, you'll be able to connect to an individual mailbox or to all mailboxes, depending on your scenario.

Overall, Stellar Mailbox Extractor for Exchange Server is a great tool that eases the work of Exchange administrators, which is why it makes sense to always have it on hand. Its simple interface and powerful capabilities are sure to make exporting data from Exchange mailboxes a breeze and a hassle-free task.

This tool would definitely be a major help in environments where the backup solution and the Exchange servers hosting older databases have been decommissioned, for example databases backed up 10 years ago. Due to critical legal requirements there might be a need to extract a mailbox, or a particular email, of an employee who resigned 10 years back. All we need is for the old backup tapes to contain the EDB file; even if the database is not in a clean shutdown state, we can bring it to a usable state and extract the user's data with no hassle using this tool. A quick way to check the database state before you start is sketched below.
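
As a sanity check before pointing any tool at a restored database, the header state can be inspected with eseutil; the path below is only an example.

# Check whether the restored database is in Clean Shutdown or Dirty Shutdown state
eseutil /mh "D:\RestoredDB\MailboxDatabase01.edb"
# If it reports Dirty Shutdown, either replay the logs with eseutil /r (if they are available)
# or let the extractor tool open the database directly as described above.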

Steps to enable vault cache in Enterprise Vault

$
0
0

In this article we will have a look at enabling Vault Cache in Veritas Enterprise Vault.

What is Vault Cache?

Vault Cache is like a personal folder, or a local copy, of a user's archived data that can be enabled and presented to end users in Outlook. It can be limited by size and can be enabled only for a few users based on the requirement.

When this option is enabled, it provides a local PST-style mapping of the user's archived data to the end user through Virtual Vault. The setup wizard starts automatically after we enable this option on the server side, and it needs to be run only once on the end-user side.

Follow the steps below to enable Vault Cache in Enterprise Vault:

Log on to Enterprise Vault – expand the Policies container – navigate to the Exchange desktop policy and select Properties – click on the Vault Cache tab and enable 'Make Vault Cache available for users'.

Once done, we get the warning below reminding us to ensure that the cache location has enough space, in addition to the other vault operations that take place from this location, because enabling this option adds extra files to this location during end-user actions.

VaultCache1

We can check the Vault Cache location and cache size in the location below.

Open the Vault Admin Console – navigate to Enterprise Vault – right-click the EV server and open Properties – click on the Cache tab. Make sure that you have added some extra space based on the number of users to whom we are going to assign this policy.

VaultCache2

Once enabled, we have the following options.

We can allow users to decide whether to enable this option themselves, or leave it disabled so that they access items from the EV store.

There is an option to limit the archive download in GB. When this option is enabled and the limit is reached, the oldest items are deleted and the new items are copied there instead.

In the content strategy we have three options:

Do not store any items in cache:

When this option is enabled, only item headers are synchronized to the Vault Cache and the content remains in the vault store partition.

Store all items:

When this option is enabled, the item headers and the content are downloaded from the server and maintained as a local copy.

Store only items that user opens:

When this option is enabled, a local copy of the headers and content is stored only for the items the user retrieves from the client.

The remaining features below are the Outlook client options that we can control for end users based on our requirements.

VaultCache3

There are a few more features in the Advanced tab that help admins fine-tune the Vault Cache settings and provide them based on the requirement.

On switching to the Advanced tab, list the settings from Vault Cache.

We can specify the download age limit; the default value is 0. This helps admins control the size of the download cache on the clients.

VaultCache4

We have an option to control the download age limit from the server side instead of letting end users decide.

VaultCache5

There are a few more options that can be modified based on our requirements.

Most importantly, we have an option to enable this feature for delegated archives. We can enable it for all archive types, for the default archive only (the user's mailbox), or for all mailbox and shared archives. This setting is mandatory if we need to enable Virtual Vault for archives other than a user's default archive.

VaultCache8

Below are the advantages of Virtual Vault:

  1. Users will be able to access archived items when offline, even without an internet connection.
  2. Users can perform multiple retrievals in parallel, and the items are served directly from the local content.
  3. Virtual Vault looks exactly like a mailbox or a mapped PST file. This makes users comfortable opening the archived items directly rather than from shortcuts. They can drag and drop items between the mailbox and the Virtual Vault.

Points to consider:

  1. When we enable this option and later disable it for end users, no new vault archives will be enabled for the users; however, the previously downloaded archives will still be present.
  2. A Vault Cache is a local copy of a user's Enterprise Vault archive and is stored on the user's local computer. As a best practice it is preferred not to store content locally for security reasons; however, enabling this provides improved search and item retrieval for end users. So encryption, either at the folder or the drive level, is recommended (a sketch follows this list).
  3. After enabling this option on the server side, it is recommended to limit the data on the client machine and not download all the content, since it requires disk space on the client.
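
As a minimal sketch for point 2, assuming the cache sits in the default per-user location (verify the actual path configured in your desktop policy), folder-level EFS or drive-level BitLocker could be applied like this:

# Folder-level: encrypt the assumed Vault Cache folder and its subfolders with EFS
cipher /E /S:"$env:LOCALAPPDATA\KVS\Enterprise Vault"
# Drive-level alternative: BitLocker on the system drive (run from an elevated PowerShell prompt)
Enable-BitLocker -MountPoint "C:" -EncryptionMethod XtsAes256 -UsedSpaceOnly -TpmProtector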

Thanks & Regards 
Sathish Veerapandian

Bulk Import local PST files to Office 365 mailboxes

$
0
0

In this article we will look at the steps to bulk import PST files into Office 365 mailboxes.
There might be a scenario where, after a switchover from on-premises to Office 365, users have maintained local PST files on a network drive without an archive solution, which is a bad practice.
When we run into these kinds of scenarios, it is definitely not recommended to keep the data this way. We might have a bunch of PST files, possibly 10 years' worth of email, that need to be imported into the associated mailboxes.

There are two options to perform this action.

Method 1: Use the free Azure service to upload the PST files and map them to the users' mailboxes.

The below prerequisites need to be met:

1) As an initial prerequisite, move all the PST files to one central location, which will make the bulk import easier. If you have them in different sites, it is better to create one central location per site.

2) If there are a large number of PST files and a lot of data, create multiple jobs; this is better for tracking and avoids choking the bandwidth and hitting throttling.

3) The administrator will require the Mailbox Import Export role to perform this operation.

Step 1: Assign the RBAC Mailbox Import Export role to the required account. This can be done via PowerShell by connecting a remote session to the Office 365 tenant (a sketch is shown below) or via the Exchange admin center in Office 365.
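
A minimal sketch of the PowerShell route, assuming a remote session to Exchange Online is already established and that admin@contoso.com is a placeholder account:

# Grant the Mailbox Import Export role to the account that will create the import job
New-ManagementRoleAssignment -Role "Mailbox Import Export" -User "admin@contoso.com"
# Verify the assignment
Get-ManagementRoleAssignment -Role "Mailbox Import Export" | Format-List RoleAssigneeName,Role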

Untitled

Untitled1

 

Office356

 

Once permission is granted, navigate to the data migration setup option on the admin page of the Office 365 admin portal. Here we need to select the option to upload PST files.

2

Now, to upload the PST files, go to New Import Job and type the job name >> Next. Then select 'Upload your data' and click Next.

3

Now an import job window will appear. Here we need to click on 'Show network upload SAS URL' and copy the URL by clicking 'Copy to clipboard'. After that, click 'Download Azure AzCopy' to download the AzCopy tool and install the application.

4

Open the Azure AzCopy command prompt and type the command below.

AzCopy.exe /Source:<network path to the PST folder> /Dest:"<SAS URL>" /V:<path to save the log file>\AzCopy.log /Y
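
For example, with a hypothetical share and log path (paste the SAS URL copied in the previous step; it is not reproduced here), the command would look like this:

AzCopy.exe /Source:"\\fileserver01\PSTImport" /Dest:"<SAS URL copied from the import job>" /V:"C:\Temp\AzCopyImport.log" /Y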

5

Note: We need to grant sharing permissions on the file or folder where the PST files are present.

Navigate to the import data window, tick both checkboxes about preparing the mapping file, and click Next.

6

Now, for this import job, we need to create the PST-to-user mapping in a CSV file (a sample layout is shown after the screenshot).

7
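
A minimal mapping file might look like the rows below; the column layout follows Microsoft's published sample at the time of writing, and the file names and mailboxes are placeholders, so always verify against the sample CSV linked on the import page.

Workload,FilePath,Name,Mailbox,IsArchive,TargetRootFolder,ContentCodePage,SPFileContainer,SPManifestContainer,SPSiteUrl
Exchange,,testuser.pst,testuser@contoso.com,FALSE,/,,,,
Exchange,,testuser2.pst,testuser2@contoso.com,TRUE,/ImportedPst,,,,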

Then upload the file by clicking the 'Select mapping file' option.

8

 

9

Once done, we can see that the PST files have been successfully imported into the associated Office 365 mailboxes.

Method 2: Use a third-party solution for migrating PST to O365 Cloud Platform

Sometimes we need a solution to import only specific items from a bunch of PST files into Office 365. So here we are going to discuss one more method, which is a third-party tool for migrating PST files to Exchange Online mailboxes.

I happened to have a look at the MailsDaddy PST to Office 365 Migration Tool, and it provides a secure and easy way to import all PST file data, such as emails, contacts, calendars, appointments and attachments, into Office 365.

The tool carries advantages like:

Export selected items only: It shows a preview of all the PST file data, and you can select specific items and migrate them to the Office 365 account. This is very useful when an organization has restored a large mailbox from old backup tapes for a legal issue: the huge PST exported from the backup can be opened, and only the required emails can be selected and imported into the user's online mailbox.

Date Range Filter: With the date filter option you can search for emails between specific dates and import only the required data from the PST file to the Exchange Online mailbox. This option is also useful when an end user requires a restore of missing emails, or when data for a resigned employee needs to be extracted from an old backup for only the last year and imported into the associated Office 365 mailbox.

Impersonation Option: Using this option, we can migrate multiple mailboxes while sharing the throttling and connection limits of each user. To use the impersonation export option, the account must have the ApplicationImpersonation role and full access rights (a sketch of the role assignment follows).
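
A minimal sketch of granting that right in Exchange Online PowerShell, assuming svc-pstmigration@contoso.com is a placeholder service account used by the tool:

# Allow the migration account to impersonate the target mailboxes
New-ManagementRoleAssignment -Role "ApplicationImpersonation" -User "svc-pstmigration@contoso.com"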

Bulk Export Option: With this option, we can export multiple PST files into multiple mailboxes by mapping all the mailboxes using a CSV file.

Below are the steps to use the MailsDaddy PST import tool:

Step 1: Download the application and install it.

Step 2: Once installed, launch the software. After that, click on 'Add file' to upload the PST file.

Step 3: Once the PST file is uploaded, the software will show a preview of all the PST file data.

10

Now we can select the mail, contacts, calendars, appointments, attachments, etc., if we need to export only selected items.
Click the Export button to import all data from the PST into Office 365.
Then select the preferred export option, enter the Office 365 mailbox ID and password, and click Export.

Here we have the options to export all folders, export selected folders, export to the primary mailbox, or export to the archive mailbox.

11

Once Export is clicked, the selected emails will be imported into the associated Office 365 mailboxes successfully.
