Thursday, November 24, 2011

Linux Dump Command to move data to another hard drive (dump, restore, backup)!

dump Linux Commands

Explanation

dump COMMAND:
     The dump command makes a backup of a filesystem, or of individual files and directories.

SYNTAX:
  The syntax is:
     dump [options] [dump-file] [filesystem or files or directories]

OPTIONS:
    
-[level]     The dump level (any integer; 0 is a full backup)
-f     Write the backup to the specified file
-u     Update the /etc/dumpdates file with a record of the backup
-v     Display verbose information
-e     Exclude the specified inode from the backup

EXAMPLE:
    
    To make a backup of a directory or file:

    dump -0uf databackup /home/user1/data

    This command creates a dump-file called databackup, which is the backup of the /home/user1/data directory.
    In the above command:
    -0    -Is the dump-level [0 specifies a full backup]
    databackup    -Is the dump-file [or backup-file]
    /home/user1/data    -Is the directory for which the backup is created
    To make a backup of a directory or file that has already been backed up with dump level 0:

    dump -1uf databackup /home/user1/data

    This command backs up all the files added to the /home/user1/data directory after the level-0 dump was made.
    -1    -Is the dump-level [1 specifies an incremental backup]
    databackup    -Is a dump-file [or backup-file]
    /home/user1/data    -Is a directory for which a backup is created
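
    To restore from the dump-file created above, the companion restore command can be used. A minimal sketch (the target directory /home/user1/restored-data is a hypothetical example):

    restore -tf databackup
    cd /home/user1/restored-data
    restore -xf databackup

    Here -t lists the contents of the dump-file and -x extracts everything from it into the current directory; -r (used in the next section) rebuilds an entire filesystem from a dump.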

Move data to another hard drive (dump, restore, backup)
        There are several methods to move a running Linux installation to another hard drive in the same server, but I used the Unix dump/restore utilities to perform this.
First of all it is necessary to partition the new hard drive in the same way as the old drive (the one Linux is running on). I usually use the 'fdisk' utility. Let's assume that the old drive is /dev/hda and the new one is /dev/hdb. To view hda's partition table, run 'fdisk -l /dev/hda', which should show something like this:
Disk /dev/hda: 60.0 GB, 60022480896 bytes
255 heads, 63 sectors/track, 7297 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
   Device Boot    Start       End    Blocks   Id  System
/dev/hda1   *         1        15    120456   83  Linux
/dev/hda2            16       276   2096482+  82  Linux swap
/dev/hda3           277      7297  56396182+  83  Linux
After this, run 'fdisk /dev/hdb' and create the same partitions on it. The interactive mode of the fdisk utility is well documented and very intuitive, so the partitioning should not be difficult.
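As a possible shortcut (not part of the original procedure, and assuming both drives have compatible geometry), the sfdisk utility can copy the partition table from the old drive to the new one in a single step:
sfdisk -d /dev/hda | sfdisk /dev/hdb
Here 'sfdisk -d' dumps hda's partition table in a format that the second sfdisk replays onto hdb.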
After this is done, we should make new filesystems on the partitions we've created:
mkfs -t ext3 /dev/hdb1
mkfs -t ext3 /dev/hdb3
mkswap /dev/hdb2
When that is done, it is NECESSARY to label the newly created filesystems the same way as the old ones. To check a filesystem's volume label, run 'tune2fs -l /dev/hda1 | grep volume', and so on for each partition. You'll see something like this:
Filesystem volume name: /boot
This means we should label the new hdb1 as /boot. It can be done with the command:
tune2fs -L "/boot" /dev/hdb1
The same should be done for all partitions except the swap one. In my case I should label hdb3 with the command:
tune2fs -L "/" /dev/hdb3
At this point the new hard drive's preparation is finished and we can proceed with moving Linux to it. Mount the new filesystem and change directory to it:
mount /dev/hdb1 /mnt/hdb1
cd /mnt/hdb1
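(If the mount points do not exist yet, create them first, e.g. 'mkdir -p /mnt/hdb1' and 'mkdir -p /mnt/hdb3' for the directories used in these steps.)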
When that is done, we can perform the move with the command:
dump -0uan -f - /boot | restore -r -f -
And the same with / partition:
mount /dev/hdb3 /mnt/hdb3
cd /mnt/hdb3
dump -0uan -f - / | restore -r -f -
When the dump/restore procedures are done, we should install the boot loader on the new HDD. Run the 'grub' utility and execute in its console:
root (hd1,0)
setup (hd1)
quit
If everything is done carefully and correctly (I've tested this method myself), you can boot from the new hard drive and have your 'old' Linux running on it.

Read more: http://www.linuxscrew.com/2007/08/13/move-linux-to-another-hard-drive-dump-restore-backup/#ixzz1ec9eDb3z


Friday, October 28, 2011

Configuring DSCP on SPARC Enterprise(XSCF)

Configuring DSCP

The Sun SPARC Enterprise Server Administration Guide explains how to set up DSCP, but it is really quite simple. The easiest method is using the syntax:
    setdscp -i NETWORK -m NETMASK 
Choose a network address (be sure to pick a subnet that is not in use at your facility) and the corresponding netmask, and setdscp will do the rest. For example, in my lab the subnet 192.168.224.0 is unused, so I do:
    XSCF> setdscp -i 192.168.224.0 -m 255.255.255.0 
There are other ways to set up the DSCP network addresses, but this is really the best approach.

setdscp will assign an IP address to the SP, and reserve one IP address for every possible domain (the M9000-64 supports 24 domains, so a maximum of 25 IP addresses are reserved). A common question that's asked is, if you're running PPP between the SP and each domain, don't you need two addresses for each domain, one for the domain and one for the SP? No, not really. Since routing is done based on the destination address, we can get away with using the same IP address for the SP on every PPP link. So technically speaking, the NETWORK and NETMASK are not defining a DSCP subnet; they are defining a range of IP addresses from which DSCP selects endpoint addresses. A subtle difference, but still a difference.

On the SP, showdscp will display the IP addresses assigned to each domain and the SP, for example:

    XSCF> showdscp

    DSCP Configuration:

    Network: 192.168.224.0
    Netmask: 255.255.255.0

    Location     Address
    ----------   ---------
    XSCF         192.168.224.1
    Domain #00   192.168.224.2
    Domain #01   192.168.224.3
    Domain #02   192.168.224.4
    Domain #03   192.168.224.5
In Solaris, the prtdscp(1M) command will display the IP address of that domain and the SP (prtdscp is located in /usr/platform/SUNW,SPARC-Enterprise/sbin). You can get the same basic information from ifconfig sppp0:
    % /usr/platform/SUNW,SPARC-Enterprise/sbin/prtdscp
    Domain Address: 192.168.224.2
    SP Address: 192.168.224.1

    % ifconfig sppp0
    sppp0: flags=10010008d1 mtu 1500 index 3
            inet 192.168.224.2 --> 192.168.224.1 netmask ffffff00

Friday, October 7, 2011

ZFS - Pool, Filesystem, RAID, Snapshots, Clones and Troubleshooting ZFS

[ZFS - Pool, Filesystem, RAID, Snapshots, Clones and Troubleshooting ZFS]

ZFS Management

ZFS was first publicly released in the 6/2006 distribution of Solaris 10. Previous versions of Solaris 10 did not include ZFS.

ZFS is flexible, scalable and reliable. It is a POSIX-compliant filesystem with several important features:

  • integrated storage pool management
  • data protection and consistency, including RAID
  • integrated management for mounts and NFS sharing
  • scrubbing and data integrity protection
  • snapshots and clones
  • advanced backup and restore features
  • excellent scalability
  • built-in compression
  • maintenance and troubleshooting capabilities
  • automatic sharing of disk space and I/O bandwidth across disk devices in a pool
  • endian neutrality

No separate filesystem creation step is required. The mount of the filesystem is automatic and does not require vfstab maintenance. Mounts are controlled via the mountpoint attribute of each file system.

Pool Management

Members of a storage pool may either be hard drives or slices of at least 128MB in size.

To create a mirrored pool:
zpool create -f pool-name mirror c#t#d# c#t#d#
To check a pool's status, run:
zpool status -v pool-name
To list existing pools:
zpool list
To remove a pool and free its resources:
zpool destroy pool-name
A destroyed pool can sometimes be recovered as follows:
zpool import -D

Additional disks can be added to an existing pool. When this happens in a mirrored or RAID Z pool, the ZFS is resilvered to redistribute the data. To add storage to an existing mirrored pool:
zpool add -f pool-name mirror c#t#d# c#t#d#

Pools can be exported and imported to transfer them between hosts.
zpool export pool-name
zpool import pool-name
Without a specified pool name, the import command lists the pools available for import:
zpool import

To clear a pool's error count, run:
zpool clear pool-name

Although virtual volumes (such as those from DiskSuite or VxVM) can be used as base devices, it is not recommended for performance reasons.

Filesystem Management

Similar filesystems should be grouped together in hierarchies to make management easier. Naming schemes should be thought out as well to make it easier to group administrative commands for similarly managed filesystems.

When a new pool is created, a new filesystem is mounted at /pool-name.

To create another filesystem:
zfs create pool-name/fs-name
To delete a filesystem:
zfs destroy filesystem-name

To rename a ZFS filesystem:
zfs rename old-name new-name

Properties are set via the zfs set command.
To turn on compression:
zfs set compression=on pool-name/filesystem-name
To share the filesystem via NFS:
zfs set sharenfs=on pool-name/fs-name
zfs set sharenfs="mount-options" pool-name/fs-name
Rather than editing /etc/vfstab, set the mountpoint directly:
zfs set mountpoint=mountpoint-name pool-name/filesystem-name

Quotas are also set via the same command:
zfs set quota=#gigG pool-name/filesystem-name
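For example (a hedged illustration; the filesystem name is made up), to cap a filesystem at 10 gigabytes:
zfs set quota=10G datapool/home/user1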

RAID Levels

ZFS filesystems automatically stripe across all top-level disk devices. (Mirrors and RAID-Z devices are considered to be top-level devices.) It is not recommended that RAID types be mixed in a pool. (zpool tries to prevent this, but it can be forced with the -f flag.)

The following RAID levels are supported:

  • RAID-0 (striping)
  • RAID-1 (mirror)
  • RAID-Z (similar to RAID 5, but with variable-width stripes to avoid the RAID 5 write hole)
  • RAID-Z2

The zfs man page recommends 3-9 disks for RAID-Z pools.
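As an illustrative sketch using the same placeholder device names as above, a single-parity RAID-Z pool across three disks could be created with:
zpool create -f pool-name raidz c#t#d# c#t#d# c#t#d#
A double-parity pool uses the raidz2 keyword in place of raidz.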

Performance Monitoring

ZFS performance management is handled differently than with older generation file systems. In ZFS, I/Os are scheduled similarly to how jobs are scheduled on CPUs. The ZFS I/O scheduler tracks a priority and a deadline for each I/O. Within each deadline group, the I/Os are scheduled in order of logical block address.

Writes are assigned lower priorities than reads, which can help to avoid traffic jams where reads are unable to be serviced because they are queued behind writes. (If a read is issued for a write that is still underway, the read will be executed against the in-memory image and will not hit the hard drive.)

In addition to scheduling, ZFS attempts to intelligently prefetch information into memory. The algorithm tries to pick information that is likely to be needed. Any forward or backward linear access patterns are picked up and used to perform the prefetch.

The zpool iostat command can monitor performance on ZFS objects; it reports the following (a usage sketch follows this list):

  • USED CAPACITY: Data currently stored
  • AVAILABLE CAPACITY: Space available
  • READ OPERATIONS: Number of operations
  • WRITE OPERATIONS: Number of operations
  • READ BANDWIDTH: Bandwidth of all read operations
  • WRITE BANDWIDTH: Bandwidth of all write operations
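
A minimal usage sketch (pool name and interval are placeholders): report these statistics every five seconds, broken down by device, with:
zpool iostat -v pool-name 5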

The health of an object can be monitored with
zpool status

Snapshots and Clones

To create a snapshot:
zfs snapshot pool-name/filesystem-name@snapshot-name
To clone a snapshot:
zfs clone snapshot-name filesystem-name
To roll back to a snapshot:
zfs rollback pool-name/filesystem-name@snapshot-name

zfs send and zfs receive allow clones of filesystems to be sent to a development environment.
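A hedged sketch with made-up pool, snapshot, and host names, streaming a snapshot into a filesystem on a development pool, either locally or over ssh:
zfs send prodpool/data@snap1 | zfs receive devpool/data-copy
zfs send prodpool/data@snap1 | ssh devhost zfs receive devpool/data-copy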

The difference between a snapshot and a clone is that a clone is a writable, mountable copy of the file system. This capability allows us to store multiple copies of mostly-shared data in a very space-efficient way.

Each snapshot is accessible through the .zfs/snapshot directory under the filesystem's mount point (for example, /pool-name/.zfs/snapshot). This can allow end users to recover their files without system administrator intervention.
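For example (all names and paths are placeholders), a user could copy a lost file straight out of a snapshot:
cp /pool-name/.zfs/snapshot/snapshot-name/docs/report.txt /tmp/report.txt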

Zones

If the filesystem is created in the global zone and added to the local zone via zonecfg, it may be assigned to more than one zone unless the mountpoint is set to legacy.
zfs set mountpoint=legacy pool-name/filesystem-name

To import a ZFS filesystem within a zone:
zonecfg -z zone-name

add fs
set dir=mount-point
set special=pool-name/filesystem-name
set type=zfs
end
verify
commit
exit

Administrative rights for a filesystem can be granted to a local zone:
zonecfg -z zone-name

add dataset
set name=pool-name/filesystem-name
end
commit
exit

Data Protection

ZFS is a transactional file system. Data consistency is protected via Copy-On-Write (COW). For each write request, a copy is made of the specified block. All changes are made to the copy. When the write is complete, all pointers are changed to point to the new block.

Checksums are used to validate data during reads and writes. The checksum algorithm is user-selectable. Checksumming and data recovery is done at a filesystem level; it is not visible to applications. If a block becomes corrupted on a pool protected by mirroring or RAID, ZFS will identify the correct data value and fix the corrupted value.

RAID protection is also part of ZFS.

Scrubbing is an additional type of data protection available on ZFS. This is a mechanism that performs regular validation of all data. Manual scrubbing can be performed by:
zpool scrub pool-name
The results can be viewed via:
zpool status
Any issues should be cleared with:
zpool clear pool-name

The scrubbing operation walks through the pool metadata to read each copy of each block. Each copy is validated against its checksum and corrected if it has become corrupted.

Hardware Maintenance

To replace a hard drive with another device, run:
zpool replace pool-name old-disk new-disk

To offline a failing drive, run:
zpool offline pool-name disk-name
(A -t flag allows the disk to come back online after a reboot.)

Once the drive has been physically replaced, run the replace command against the device:
zpool replace pool-name device-name
After an offlined drive has been replaced, it can be brought back online:
zpool online pool-name disk-name

Firmware upgrades may cause the disk device ID to change. ZFS should be able to update the device ID automatically, assuming that the disk was not physically moved during the update. If necessary, the pool can be exported and re-imported to update the device IDs.

Troubleshooting ZFS

The three categories of errors experienced by ZFS are:

  • missing devices: Missing devices placed in a "faulted" state.
  • damaged devices: Caused by things like transient errors from the disk or controller, driver bugs or accidental overwrites (usually on misconfigured devices).
  • data corruption: Data damage to top-level devices; usually requires a restore. Since ZFS is transactional, this only happens as a result of driver bugs, hardware failure or filesystem misconfiguration.

It is important to check for all three categories of errors. One type of problem is often connected to a problem from a different family. Fixing a single problem is usually not sufficient.

Data integrity can be checked by running a manual scrub:
zpool scrub pool-name
After the scrub is complete, check the results with:
zpool status -v pool-name

The status command also reports on recovery suggestions for any errors it finds. These are reported in the action section. To diagnose a problem, use the output of the status command and the fmd messages in /var/adm/messages.

The config section of the status output reports the state of each device. The state can be:

  • ONLINE: Normal
  • FAULTED: Missing, damaged, or mis-seated device
  • DEGRADED: Device being resilvered
  • UNAVAILABLE: Device cannot be opened
  • OFFLINE: Administrative action

The status command also reports READ, WRITE or CHKSUM errors.

To check if any problem pools exist, use
zpool status -x
This command only reports problem pools.

If a ZFS configuration becomes damaged, it can be fixed by running export and import.

Devices can fail for any of several reasons:

  • "Bit rot:" Corruption caused by random environmental effects.
  • Misdirected Reads/Writes: Firmware or hardware faults cause reads or writes to be addressed to the wrong part of the disk.
  • Administrative Error
  • Intermittent, Sporadic or Temporary Outages: Caused by flaky hardware or administrator error.
  • Device Offline: Usually caused by administrative action.

Once the problems have been fixed, transient errors should be cleared:
zpool clear pool-name

In the event of a panic-reboot loop caused by a ZFS software bug, the system can be instructed to boot without the ZFS filesystems:
boot -m milestone=none
When the system is up, remount / as rw and remove the file /etc/zfs/zpool.cache. The remainder of the boot can proceed with the
svcadm milestone all command. At that point import the good pools. The damaged pools may need to be re-initialized.
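A minimal command sketch of that recovery sequence (assuming a root filesystem that can be remounted in place; the pool name is a placeholder):
mount -o remount,rw /
rm /etc/zfs/zpool.cache
svcadm milestone all
zpool import pool-name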

Scalability

The filesystem is 128-bit, so 256 quadrillion zettabytes of information are addressable. Directories can have up to 256 trillion entries. No limit exists on the number of filesystems or files within a filesystem.

ZFS Recommendations

Because ZFS uses kernel addressable memory, we need to make sure to allow enough system resources to take advantage of its capabilities. We should run on a system with a 64-bit kernel, at least 1GB of physical memory, and adequate swap space.

While slices are supported for creating storage pools, their performance will not be adequate for production uses.

Mirrored configurations should be set up across multiple controllers where possible to maximize performance and redundancy.

Scrubbing should be scheduled on a regular basis to identify problems before they become serious.

When latency or other requirements are important, it makes sense to separate them onto different pools with distinct hard drives. For example, database log files should be on separate pools from the data files.

Root pools are not yet supported in the Solaris 10 6/2006 release, though they are anticipated in a future release. When they are used, it is best to put them on separate pools from the other filesystems.

On filesystems with many file creations and deletions, utilization should be kept under 80% to protect performance.

The recordsize parameter can be tuned on ZFS filesystems. When it is changed, it only affects new files. zfs set recordsize=size tuning can help where large files (like database files) are accessed via small, random reads and writes. The default is 128KB; it can be set to any power of two between 512B and 128KB. Where the database uses a fixed block or record size, the recordsize should be set to match. This should only be done for the filesystems actually containing heavily-used database files.
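For example (a hedged illustration; the filesystem name is made up), to match a database that uses an 8 KB block size:
zfs set recordsize=8K datapool/oradata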

In general, recordsize should be reduced when iostat regularly shows a throughput near the maximum for the I/O channel. As with any tuning, make a minimal change to a working system, monitor it for long enough to understand the impact of the change, and repeat the process if the improvement was not good enough or reverse it if the effects were bad.

The ZFS Evil Tuning Guide contains a number of tuning methods that may or may not be appropriate to a particular installation. As the document suggests, these tuning mechanisms will have to be used carefully, since they are not appropriate to all installations.

For example, the Evil Tuning Guide provides instructions for:
  • Turning off file system checksums to reduce CPU usage. This is done on a per-file system basis:
    zfs set checksum=off filesystem

  • Re-enabling checksums or selecting a specific algorithm: zfs set checksum='on | fletcher2 | fletcher4 | sha256' filesystem
  • Limiting the ARC size by setting
    set zfs:zfs_arc_max
    in /etc/system on 8/07 and later (an /etc/system sketch follows this list).
  • If the I/O includes multiple small reads, the file prefetch can be turned off by setting
    set zfs:zfs_prefetch_disable
    in /etc/system on 8/07 and later.
  • If the I/O channel becomes saturated, the device level prefetch can be turned off with
    set zfs:zfs_vdev_cache_bshift = 13
    in /etc/system for 8/07 and later
  • I/O concurrency can be tuned by setting
    set zfs:zfs_vdev_max_pending = 10
    in /etc/system in 8/07 and later.
  • If storage with an NVRAM cache is used, cache flushes may be disabled with
    set zfs:zfs_nocacheflush = 1
    in /etc/system for 11/06 and later.
  • ZIL intent logging can be disabled. (WARNING: Don't do this.)
  • Metadata compression can be disabled. (Read this section of the Evil Tuning Guide first-- you probably do not need to do this.)
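
As referenced above, a hedged /etc/system sketch pulling a few of these settings together; the values shown (a 4 GB ARC cap) are illustrative assumptions, not recommendations:
    * /etc/system additions for ZFS tuning (example values only)
    set zfs:zfs_arc_max = 0x100000000
    set zfs:zfs_prefetch_disable = 1
    set zfs:zfs_vdev_max_pending = 10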

Sun Cluster Integration

ZFS can be used as a failover-only file system with Sun Cluster installations.

If it is deployed on disks also used by Sun Cluster, do not deploy it on any Sun Cluster quorum disks. (A ZFS-owned disk may be promoted to be a quorum disk on current Sun Cluster versions, but adding a disk to a ZFS pool may result in quorum keys being overwritten.)

ZFS Internals

Max Bruning wrote an excellent paper on how to examine the internals of a ZFS data structure. (Look for the article on the ZFS On-Disk Data Walk.) The structure is defined in ZFS On-Disk Specification.

Some key structures:

  • uberblock_t: The starting point when examining a ZFS file system. A 128 KB array of 1 KB uberblock_t structures, starting at 0x20000 bytes within a vdev label. Defined in uts/common/fs/zfs/sys/uberblock_impl.h. Only one uberblock is active at a time; the active uberblock can be found with
    zdb -uuu zpool-name
  • blkptr_t: Locates, describes, and verifies blocks on a disk. Defined in uts/common/fs/zfs/sys/spa.h.
  • dnode_phys_t: Describes an object. Defined by uts/common/fs/zfs/sys/dmu.h
  • objset_phys_t: Describes a group of objects. Defined by uts/common/fs/zfs/sys/dmu_objset.h
  • ZAP Objects: Blocks containing name/value pair attributes. ZAP stands for ZFS Attribute Processor. Defined by uts/common/fs/zfs/sys/zap_leaf.h
  • Bonus Buffer Objects:
    • dsl_dir_phys_t: Contained in a DSL directory dnode_phys_t; contains object ID for a DSL dataset dnode_phys_t
    • dsl_dataset_phys_t: Contained in a DSL dataset dnode_phys_t; contains a blkprt_t pointing indirectly at a second array of dnode_phys_t for objects within a ZFS file system.
    • znode_phys_t: In the bonus buffer of dnode_phys_t structures for files and directories; contains attributes of the file or directory. Similar to a UFS inode in a ZFS context.

[ZFS - Pool, Filesystem, RAID, Snapshots, Clones and Troubleshooting ZFS]

Refer: princeton.edu/~unix/Solaris/troubleshoot/zfs.html

Friday, September 30, 2011

Interesting monitoring software | Cloud Monitoring: Easily Report on Cloud Applications, Servers and Services

Interesting monitoring software >>> Cloud Monitoring <<<

Easily Report on Cloud Applications, Servers and Services
  1. Are you experimenting with cloud infrastructure and cloud applications? How are you monitoring them?

  2. Need to monitor public clouds (like Amazon EC2, Google Apps, etc.)? up.time makes it easy.


When You Start Experimenting with Cloud, up.time Has You Covered

  • Know what to migrate into the cloud first: Quickly identify candidates for the cloud with deep (and relevant) historical performance metrics that profile workloads in physical and virtual environments. Then easily monitor cloud platforms (EC2) and applications from inside the cloud.
  • Easily monitor and manage cloud, virtual, and physical with a single tool: Cloud monitoring (Amazon EC2), virtual server monitoring (VMware, etc), physical server monitoring (Windows, Solaris, AIX, HP-UX, Linux, Novell NetWare) from a single console. Multiple datacenters, virtualized applications, and cloud-based applications.
  • Too many tools? Eliminate point tool and integration headaches. up.time meets current and future tool needs for physical, virtual and cloud monitoring and management.
  • Build automation that makes scaling up and problem solving simple: Management and monitoring tools need to understand how applications change over time and must trigger workflows to deal with changing workloads. up.time can automatically spin up and monitor virtual or cloud instances to meet capacity demands.
  • Drive SLAs, regardless of virtual, physical or cloud environment: up.time understands the SLA dependencies between applications across physical, virtual and cloud environments. Quickly identify breakdowns in infrastructure and immediately see how they affect the SLAs.
  • Isolate problems quickly: Intelligent alerting eliminates "sea of red," and offers fast root-cause detection and automated actions to stop recurring outages. Customize alerting so only the right person gets the right alert at the right time.
  • Affordable licensing: Complete application and server monitoring. No tiers or management packs and free lifetime upgrades with support. Just count your physical servers (even in virtual and cloud environments!). Installs in minutes; deploys in less than 1 day. Over 700 enterprise customers in 32 countries agree.

Cloud Monitor Download

Free Trial: Download a 30-day Trial of up.time (Virtual Appliance Also Available)

Monday, September 26, 2011

Cloud Flare ++ It'll supercharge your website.




Cloud Flare?

CloudFlare is a FREE system that acts as a proxy between visitors and our server. By acting as a proxy, CloudFlare caches static content from the site, which lowers the number of requests to the server but still allows visitors to access the site. There are several advantages of the CloudFlare system, described below. Because CloudFlare acts as a proxy at the DNS level and reduces the traffic load on the VPS, DDoS attacks are easily diverted, keeping the site and forum chugging along nicely.

CloudFlare has just announced that they picked up a cool $20 million in investment last November, almost a 1000% increase on the funds raised in 2009, proving that someone thinks they're on to something. And Qwerty.ie couldn't agree more; that's why we already use CloudFlare!

For those who haven't heard of CloudFlare yet (the emphasis being on the yet), it provides a distributed network that creates a community environment to help protect your website and all other members within the network. Information about unwanted traffic is shared, which means that if one website is attacked, the attacker can quickly be blocked from reaching every site in CloudFlare! It's added security for your website. As if that wasn't enough, CloudFlare will distribute your website across the network, making it practically indestructible (please do not test this on Qwerty.ie!). Your website will be faster and use less bandwidth.



CloudFlare and W3 Total Cache WordPress Integration

CloudFlare protects and accelerates any website online. Once your website is a part of the CloudFlare community, its web traffic is routed through our intelligent global network. We automatically optimize the delivery of your web pages so your visitors get the fastest page load times and best performance. We also block threats and limit abusive bots and crawlers from wasting your bandwidth and server resources. The result: CloudFlare-powered websites see a significant improvement in performance and a decrease in spam and other attacks.

CloudFlare’s system gets faster and smarter as our community of users grows larger. We have designed the system to scale with our goal in mind: helping power and protect the entire Internet.

CloudFlare can be used by anyone with a website and their own domain, regardless of your choice in platform. From start to finish, setup takes most website owners less than 5 minutes. Adding your website requires only a simple change to your domain’s DNS settings. There is no hardware or software to install or maintain and you do not need to change any of your site’s existing code. If you are ever unhappy you can turn CloudFlare off as easily as you turned it on. Our core service is free and we offer enhanced services for websites who need extra features like real time reporting or SSL.

To get straight to the point of using CloudFlare, here are the advantages:

  • Your website will load twice as fast
  • Your website will use 60% less bandwidth
  • Your website will have 65% fewer requests
  • Your website will be way more secure

And all this is for free!



Presentation:
http://cdata.github.com/presentations/what-else-is-cloudflare/

Friday, September 23, 2011

Microsoft Outlook - How to Backup and Restore an archive data.

Copy all items from an archived location

You don't have to use a backup tool to save your data. Instead, you can copy the PST file and save it elsewhere. Find your PST file by doing the following:

1 Go to Account Settings from the Tools menu.
2 Select the Data Files tab and double-click the Personal Folder you wish to backup.
3 Put your cursor into the Filename box, click the box twice to highlight the line, and press Ctrl+C to copy it.
4 Right-click the Start button and select Explore.
5 Put your cursor into the folder location box and press Ctrl+V to paste the location of the personal folders file.
6 Delete the PST file name (Outlook.pst, for example) and press Enter to find the PST file.
7 Save the file to a network drive or online storage Web site.

To restore the data, do the following:

1 Locate the backup copy of the PST file. Right-click the file and choose Copy.
2 Right-click Start and select Explore.
3 Find the folder where the original PST file lives (default C:\Users\[your user name]\AppData\Local\Microsoft\Outlook\Outlook.pst) and press Ctrl+V to paste the file.
4 Open Outlook to confirm the data appears.

Archive Data

While it's nice to have e-mail messages available when you need them, having too many affects performance. Professional organizers advise getting rid of things you haven't touched in over a year, so archive items that you haven't opened in a year or two. That way, you can always get them back later if a time comes when you need them. Rather than doing the cleaning yourself, let AutoArchive do the dirty work.

Use AutoArchive to automatically move important but infrequently used items to an archive file and to permanently delete expired items. The following is the default location and name for the archive file:

C:\Documents and Settings\[your user name]\Local Settings\Application Data\Microsoft\Outlook\Archive.pst

Archived items appear in the Archives Folder in the Outlook Folder List located in the Navigation Pane.

You may want to back up the archive file the same way you back up the PST file. Recovering the PST file alone won't recover your archived items. Consider backing up both files at the same time.

You can manually archive items whenever you want by doing the following:

1 Select Archives from the File menu.
2 Select whether to archive folders according to AutoArchive settings or select the folders and subfolders to archive.
3 Select the date for archiving items, and click OK.

You can change the settings for AutoArchives using the following steps:

1 Select Options from the Tools menu.
2 Select the Other tab and click AutoArchive.
3 Modify the AutoArchive options to suit your needs and click OK twice to close the windows.

You can turn off AutoArchive to prevent it from running on all folders. Return to the AutoArchive options and uncheck the box next to "Run AutoArchive every n days."

You may have folders with items that should stay in your Personal Folders instead of moving into the Archives Folder. To modify Outlook to skip archiving a folder, do the following:

1 Right-click the folder and select Properties.
2 Select the AutoArchive tab.
3 Select Do not archive items in this folder and click OK.

Tuesday, May 10, 2011

Add or delete or set default printer from DOS

Add or delete or set default printer from DOS

Windows XP provides a VBS script to manage local and network printers from the Windows command line. This script is Prnmngr.vbs and can be found in the directory C:\Windows\System32. Using this script, we can add or delete network printers and set or change the default printer. Below you can find the syntax and use cases of this script, along with some examples.

Add a network printer from windows command line

You can add a network printer connection by specifying the server name and printer name in UNC form, as in the command below.

cscript C:\windows\system32\prnmngr.vbs -ac -p "\\servername\printername"

Get the list of printers added to the system from the Windows command line (CMD).

cscript C:\windows\system32\prnmngr.vbs -l

Get the default printer details from command line.

cscript C:\windows\system32\prnmngr.vbs -g

Example:

C:\>cscript C:\windows\system32\prnmngr.vbs -g
Microsoft (R) Windows Script Host Version 5.6
Copyright (C) Microsoft Corporation 1996-2001. All rights reserved.

The default printer is \\printer01\LASERJET_8150PRINTER

C:\>

Set default printer from windows command line.

You can run the below command to set default printer.

cscript C:\windows\system32\prnmngr.vbs -t -p "\\Servername\printername"

For example, to set the printer laserjet_02 connected to the server \\printer02 as default, we need to run the below command.

cscript C:\windows\system32\prnmngr.vbs -t -p \\printer02\laserjet_02

Note that double quotes are not required if there is no space in the printer name.

Delete a network printer from windows command prompt:

cscript C:\windows\system32\prnmngr.vbs -d "\\servername\printername"