Technology changes the world, reading shapes a life! - shaogx.com


android sqlite database

Let's take a look at some basic operations for using an SQLite database on Android. For most apps the database requirements are simple: nothing more than CRUD. I personally prefer to wrap each table I work with in a JavaBean before operating on it, which keeps the logic clearer. First, we write an Entity:... Read more
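The article goes on to define the Entity as a Java class for Android. Purely as a hedged illustration of the same entity-plus-CRUD idea, here is a minimal sketch using Python's built-in sqlite3 module; the Note table and its columns are invented for the example and are not from the article.

import sqlite3
from dataclasses import dataclass

@dataclass
class Note:                 # the "Entity": one object per table row (hypothetical table)
    id: int
    title: str
    body: str

def create_table(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS note (id INTEGER PRIMARY KEY, title TEXT, body TEXT)")

def insert(conn, note):     # Create
    cur = conn.execute("INSERT INTO note (title, body) VALUES (?, ?)", (note.title, note.body))
    return cur.lastrowid

def get(conn, note_id):     # Read
    row = conn.execute("SELECT id, title, body FROM note WHERE id = ?", (note_id,)).fetchone()
    return Note(*row) if row else None

def update(conn, note):     # Update
    conn.execute("UPDATE note SET title = ?, body = ? WHERE id = ?", (note.title, note.body, note.id))

def delete(conn, note_id):  # Delete
    conn.execute("DELETE FROM note WHERE id = ?", (note_id,))

conn = sqlite3.connect(":memory:")
create_table(conn)
nid = insert(conn, Note(None, "hello", "first row"))
print(get(conn, nid))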

android sqlite database

Duplicate From Active Database

RMAN 'Duplicate From Active Database' Feature in 11G (Doc ID 452868.1). APPLIES TO: Oracle Database - Enterprise Edition - Version 11.1.0.6 and later. Information in this document applies to any platform. PURPOSE: The scope of this bulletin is to discuss the different types of RMAN 'duplicate database' features in Oracle 11G.... Read more

Duplicate Active Database

A detailed look at database link in Oracle

Database link explained: by creating a database link between databases, a user can conveniently run DML against data owned by a given user in a remote database, but cannot run DDL over it. Database links come in two forms: a public link (public database link: a link created by one user that other users may also use) and a private link (database link: only the user who created the link can use it; other users cannot). Test environment: 64-bit Oracle 11g on Windows (local database)... Read more
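As a minimal sketch of the two link types the article distinguishes, here are the corresponding CREATE statements issued from Python via cx_Oracle (assuming the Oracle client and cx_Oracle are installed); the same SQL can be run directly in SQL*Plus, and every user name, password, and TNS alias below is a placeholder rather than anything taken from the article.

import cx_Oracle  # assumption: Oracle client libraries and cx_Oracle are available

# Placeholder connection to the local database.
conn = cx_Oracle.connect("local_user", "local_pass", "localhost:1521/orcl")
cur = conn.cursor()

# Private link: only the schema that creates it can use it.
cur.execute("""
    CREATE DATABASE LINK my_private_link
    CONNECT TO remote_user IDENTIFIED BY remote_pass
    USING 'remote_tns_alias'
""")

# Public link: any user of the local database can use it
# (requires the CREATE PUBLIC DATABASE LINK privilege).
cur.execute("""
    CREATE PUBLIC DATABASE LINK shared_link
    CONNECT TO remote_user IDENTIFIED BY remote_pass
    USING 'remote_tns_alias'
""")

# DML through a link is allowed...
cur.execute("UPDATE emp@my_private_link SET sal = sal * 1.1 WHERE deptno = 10")
conn.commit()
# ...but DDL against objects on the remote database through the link is not.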

oracle database link explained

Oracle Database Links: implementation explained

What are Database Links? First, their purpose: they let a user reach another, remote database through the database they are connected to. A Database Link stores the connection information for that remote database, as shown in the figure below:... Read more
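To make the "reach a remote database through a local one" idea concrete, here is a small hedged sketch of querying through an existing link from Python via cx_Oracle; the link name hq_link, the table names, and the connection details are invented for illustration.

import cx_Oracle  # assumption: Oracle client libraries and cx_Oracle are available

conn = cx_Oracle.connect("app_user", "app_pass", "localhost:1521/localdb")
cur = conn.cursor()

# A remote table is addressed as table_name@link_name; otherwise the SQL is unchanged.
cur.execute(
    "SELECT order_id, amount FROM orders@hq_link WHERE amount > :minimum",
    {"minimum": 1000})
for order_id, amount in cur:
    print(order_id, amount)

# Local and remote tables can be joined in the same statement.
cur.execute("""
    SELECT c.name, o.amount
    FROM customers c
    JOIN orders@hq_link o ON o.customer_id = c.id
""")
print(cur.fetchall())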

Oracle Database Links implementation

A cloud computing power tool: a first look at Oracle NoSQL Database

Over the past few years, new projects have kept appearing in the NoSQL world, and we regularly hear ambitious promoters swear that their new NoSQL application breaks all the old molds and delivers unimaginable performance. In reality, some of that is overstated: NoSQL still cannot make it onto Wall Street, and even trend-chasing developers only dare to use it for the trivial, low-stakes data of everyday life. Still, the old tabular structures really are too limiting, and if they can be cast off, database speed can improve dramatically.... Read more

Oracle NoSQL Database

How do you create a DATABASE LINK in an Oracle database?

How do you create a DATABASE LINK in an Oracle database? This article walks through the creation process with a concrete example; let's look at the process together.... Read more

Oracle DATABASE LINK

mysql database backup using a python script

MySQL database backup using a Python script:

#!/usr/bin/env python
#coding=utf8
#author : itnihao
#mail   : itnihao@qq.com
#source : http://code.taobao.org/p/python2/src/trunk/
#version: 1.0

'''
Purpose: MySQL backup. Each database is dumped with mysqldump.
1. The user, pass, host, path, and del parameters are variables.
2. By default backups older than 5 days are deleted; backups are taken once a day.
3. The default backup path is /mysql_backup; it is created if it does not exist.
4. Usage: make the script executable and run it from a scheduled task (cron).
'''
import os
import subprocess
import datetime

'''Settings'''
MYSQL_USER = 'root'
MYSQL_PASS = 'pass'
MYSQL_HOST = 'localhost'
DEL_DAYS   = 5
BACK_PATH  = '/mysql_backup'

'''Dates: today's stamp for new dumps, and the stamp old enough to delete'''
CUR_TIME = datetime.date.today()
AGO_TIME = datetime.timedelta(days=DEL_DAYS)
DEL_TIME = CUR_TIME - AGO_TIME

'''Backup function'''
def mysqldump():
    # Make sure the backup directory exists, then work from it
    if not os.path.isdir(BACK_PATH):
        os.mkdir(BACK_PATH)
    os.chdir(BACK_PATH)
    # List the databases, skipping the header row and information_schema
    database_cmd = subprocess.Popen(
        "mysql -u%s -p%s -h%s -e 'show databases' | grep -v Database | grep -v information"
        % (MYSQL_USER, MYSQL_PASS, MYSQL_HOST),
        stdout=subprocess.PIPE, shell=True)
    DATABASE_NAME = database_cmd.stdout.read().split()
    for DATABASE in DATABASE_NAME:
        # Dump each database to <BACK_PATH>/<date><dbname>.sql
        MYSQLDUMP_FILENAME = os.path.join(BACK_PATH, "%s%s.sql" % (CUR_TIME, DATABASE))
        subprocess.call("mysqldump -u%s -p%s -h%s %s > %s"
                        % (MYSQL_USER, MYSQL_PASS, MYSQL_HOST, DATABASE, MYSQLDUMP_FILENAME),
                        shell=True)
        # Remove the dump taken DEL_DAYS ago, if it still exists
        OLD_FILENAME = os.path.join(BACK_PATH, "%s%s.sql" % (DEL_TIME, DATABASE))
        if os.path.isfile(OLD_FILENAME):
            os.remove(OLD_FILENAME)

mysqldump()

Script download: http://code.taobao.org/p/python2/src/trunk/mysql_backup.py ... Read more

mysql database backup with python scripts

CRUD operations with HTML5 Web SQL Database and Indexed Database

Original article: HTML5 Web SQL Database and Indexed Database CRUD operations, http://www.ibm.com/developerworks/cn/web/1210_jiangjj_html5db/?cmp=dwnpr&cpb=dw&ct=dwcon&cr=cn_51CTO_dl&ccy=cn ... Read more

IBMdw

oracle drop database

Before 10g, there were only two ways to remove a database completely: delete it with the DBCA graphical tool, or shut it down and manually delete the data files, control files, and log files. Starting with 10g, Oracle provides a DROP DATABASE statement, which makes dropping a database very simple. DROP DATABASE does, however, come with some restrictions:... Read more

database oracle leisure workplace

F5 supports running Oracle Database and Oracle WebLogic

[51CTO.com report] F5 Networks, Inc. announced that its BIG-IP solutions have achieved Oracle Database Ready and Oracle WebLogic Ready status through the Oracle Partner Network (OPN). This milestone shows that, as a Gold-level member of the Oracle Partner Network, F5 has completed full BIG-IP version 10 product testing and support on Oracle Database 11g Release 2 and Oracle WebLogic Server 11g Release 1.... Read more

F5 Oracle Database Oracle WebLogic

A brief introduction to the two ways of creating a Database Link in Oracle

How do you create a Database Link in an Oracle database? That is the focus of this article. There are two ways to create a Database Link: through the menu, or through SQL. The goal is to create a dblink named dblink_name from database A to database B, where B's IP is 192.168.1.73, the port is 1521, the instance name is oracle, the login is tast, and the password is test. Menu approach: open PL/SQL Developer, click [File] - [New] - [Database link], which opens the window shown in the figure below (the SQL alternative is sketched after the teaser)... Read more
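The teaser stops at the menu approach; as a hedged sketch of the SQL alternative, here is the equivalent statement built from the parameters quoted above (host 192.168.1.73, port 1521, instance oracle, login tast, password test), issued from Python via cx_Oracle purely for illustration; it can just as well be run in PL/SQL Developer's SQL window. The connection details for database A are placeholders.

import cx_Oracle  # assumption: Oracle client libraries and cx_Oracle are available

# Connect to database A; these connection details are placeholders.
conn = cx_Oracle.connect("a_user", "a_password", "database-a-host:1521/a_service")
cur = conn.cursor()

# Create the link to database B with the parameters given in the article:
# IP 192.168.1.73, port 1521, instance name oracle, login tast, password test.
cur.execute("""
    CREATE DATABASE LINK dblink_name
    CONNECT TO tast IDENTIFIED BY test
    USING '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=192.168.1.73)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=oracle)))'
""")

# Quick check that the link resolves.
cur.execute("SELECT sysdate FROM dual@dblink_name")
print(cur.fetchone())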

Oracle Database Link

Oracle: an error when starting Database Control

I've been looking into Oracle lately and installed Oracle 11g R2 on Windows Server 2008 R2. When the software install finished it reported an error, which left me a bit uneasy, so I set it aside to deal with at the end. Later I searched around and someone said it was a firewall issue; I checked, and sure enough the firewall was enabled. I turned the firewall off and went into the Oracle BIN directory,... Read more

oracle 11g error control database

Oracle 11g Data Guard: creating a standby database with duplicate from active database

Building a Data Guard setup this way keeps primary downtime to a minimum: only a restart is needed to make the parameters take effect. The same method can also be used for database migration: build the Data Guard pair, then activate the standby, so total downtime during the migration is short. Oracle 11g's physical standby supports apply plus real-time query while open read only, giving you the stability of a physical standby together with the report-query capability of a logical standby. Oracle: 11.2.0.1; OS: Red Hat 5.5; Primary IP: 192.168.2.42, DB_NAME=sanfu; Standby IP: 192.168.2.43, DB_NAME=sanfu ... Read more

oracle adg

Troubleshooting Exchange 2007 Store Log/Database growth issues

 One of the common issues we see in support is excessive Database and/or Transaction log growth problems. If you have ever run in to one of these issues, you will find that they are not always easy to troubleshoot as there are many tools that are needed to help understand where the problem might be coming from. Customers have asked why does the Server allow these type of operations to occur in the first place and why is the Exchange Server not resilient to this? That is not always an easy question to answer as there as so many variables as to why this may occur in the first place ranging from faulty Outlook Add-ins, Custom or 3rd party applications, corrupted rules, corrupted messages, online maintenance not running long enough to properly maintain your database, and the list goes on and on.Once an Outlook client has created a profile to the Exchange server, they pretty much have full reign to do whatever actions they want within that MAPI profile. This of course, will be controlled mostly by your Organizations mailbox and message size limits and some of the Client throttling or backoff features that are new to Exchange 2007. Since I have dealt with these type problems in great detail, I thought it would be helpful to share some troubleshooting steps with you that may help you collect, detect and mitigate these problems when and if you should see them. General Troubleshooting Use Exchange User Monitor (Exmon) server side to determine if a specific user is causing the log growth problems. Sort on CPU (%) and look at the top 5 users that are consuming the most amount of CPU inside the Store process. Check the Log Bytes column to verify for this log growth for a potential user. If that does not show a possible user, sort on the Log Bytes column to look for any possible users that could be attributing to the log growth If it appears that the user in Exmon is a ?, then this is representative of a HUB/Transport related problem generating the logs. Query the message tracking logs using the Message Tracking Log tool in the Exchange Management Consoles Toolbox to check for any large messages that might be running through the system. See step 5.9for a Powershell script to accomplish the same task. If suspected user is found via Exmon, then do one of the following:Disable the users AD account temporarilyKill their TCP connection with TCPViewCall the client to have them close Outlook in the condition state for immediate relief. If closing the client down seems to stop the log growth issue, then we need to do the following to see if this is OST or Outlook profile related:Have the user launch Outlook whileholding down the control key which will prompt if you would like to run Outlook in safe mode. If launching Outlook in safe mode resolves the log growth issue, then concentrate on what add-ins could be attributing to this problem.If you can gain access to the users machine, then do one of the following:Launch Outlook to confirm the log file growth issue on the server.If log growth is confirmed, do one of the followingCheck users Outbox for any messages. If user is running in Cached mode, set the Outlook client to Work Offline. 
Doing this will help stop the message being sent in the outbox and sometimes causes the message to NDR.If user is running in Online Mode, then try moving the message to another folder to prevent Outlook or the HUB server from processing the message.After each one of the steps above, check the Exchange server to see if log growth has ceasedCall Microsoft Product Support to enable debug logging of the Outlook client to determine possible root cause.Follow the Running Process Explorer instructions in the below article to dump out dlls that are running within the Outlook Process. Name the file username.txt. This helps check for any 3rd party Outlook Add-ins that may be causing the excessive log growth. 970920  Using Process Explorer to List dlls Running Under the Outlook.exe Process http://support.microsoft.com/kb/970920Check the Sync Issues folder for any errors that might be occurringLet’s attempt to narrow this down further to see if the problem is truly in the OST or something possibly Outlook Profile related:Run ScanPST against the users OST file to check for possible corruption.With the Outlook client shut down, rename the users OST file to something else and then launch Outlook to recreate a new OST file. If the problem does not occur, we know the problem is within the OST itself.If renaming the OST causes the problem to recur again, then recreate the users profile to see if this might be profile related. Ask Questions: Is the user using any type of mobile device?Question the end user if at all possible to understand what they might have been doing at the time the problem started occurring. It’s possible that a user imported a lot of data from a PST file which could cause log growth server side or there was some other erratic behavior that they were seeing based on a user action.If Exmon does not provide the data that is necessary to get root cause, then do the following:Check current queues against all HUB Transport Servers for stuck or queued messages get-exchangeserver | where {$_.IsHubTransportServer -eq "true"} | Get-Queue | where {$_.Deliverytype –eq “MapiDelivery”} | Select-Object Identity, NextHopDomain, Status, MessageCount | export-csv  HubQueues.csv Review queues for any that are in retry or have a lot of messages queued. Export out message sizes in MB in all Hub Transport queues to see if any large messages are being sent through the queues. get-exchangeserver | where {$_.ishubtransportserver -eq "true"} | get-message –resultsize unlimited | Select-Object Identity,Subject,status,LastError,RetryCount,queue,@{Name="Message Size MB";expression={$_.size.toMB()}} | sort-object -property size –descending | export-csv HubMessages.csv   Export out message sizes in Bytes in all Hub Transport queues. get-exchangeserver | where {$_.ishubtransportserver -eq "true"} | get-message –resultsize unlimited | Select-Object Identity,Subject,status,LastError,RetryCount,queue,size | sort-object -property size –descending | export-csv HubMessages.csv Check Users Outbox for any large, looping, or stranded messages that might be affecting overall Log Growth. 
get-mailbox -ResultSize Unlimited| Get-MailboxFolderStatistics -folderscope Outbox | Sort-Object Foldersize -Descending | select-object identity,name,foldertype,itemsinfolder,@{Name="FolderSize MB";expression={$_.folderSize.toMB()}} | export-csv OutboxItems.csv Note: This does not get information for users that are running in cached mode.Utilize the MSExchangeIS Client\Jet Log Record Bytes/sec and MSExchangeIS Client\RPC Operations/sec Perfmon counters to see if there is a particular client protocol that may be generating excessive logs. If a particular protocol mechanism if found to be higher than other protocols for a sustained period of time, then possibly shut down the service hosting the protocol. For example, if Exchange Outlook Web Access is the protocol generating potential log growth, then stopping the World Wide Web Service (W3SVC) to confirm that log growth stops. If log growth stops, then collecting IIS logs from the CAS/MBX Exchange servers involved will help provide insight in to what action the user was performing that was causing this occur. Run the following command from the Management shell to export out current user operation rates: To export to CSV File: get-logonstatistics |select-object username,Windows2000account,identity,messagingoperationcount,otheroperationcount,progressoperationcount,streamoperationcount,tableoperationcount,totaloperationcount | where {$_.totaloperationcount -gt 1000} | sort-object totaloperationcount -descending| export-csv LogonStats.csv To view realtime data: get-logonstatistics |select-object username,Windows2000account,identity,messagingoperationcount,otheroperationcount,progressoperationcount,streamoperationcount,tableoperationcount,totaloperationcount | where {$_.totaloperationcount -gt 1000} | sort-object totaloperationcount -descending| ft Key things to look for: In the below example, the Administrator account was storming the testuser account with email. You will notice that there are 2 users that are active here, one is the Administrator submitting all of the messages and then you will notice that the Windows2000Account references a HUB server referencing an Identity of testuser. The HUB server also has *no* UserName either, so that is a giveaway right there. This can give you a better understanding of what parties are involved in these high rates of operations UserName : Administrator Windows2000Account : DOMAIN\Administrator Identity : /o=First Organization/ou=First Administrative Group/cn=Recipients/cn=Administrator MessagingOperationCount : 1724 OtherOperationCount : 384 ProgressOperationCount : 0 StreamOperationCount : 0 TableOperationCount : 576 TotalOperationCount : 2684 UserName : Windows2000Account : DOMAIN\E12-HUB$ Identity : /o= First Organization/ou=Exchange Administrative Group (FYDIBOHF23SPDLT)/cn=Recipients/cn=testuser MessagingOperationCount : 630 OtherOperationCount : 361 ProgressOperationCount : 0 StreamOperationCount : 0 TableOperationCount : 0 TotalOperationCount : 1091 Enable Perfmon/Perfwiz logging on the server. Collect data through the problem times and then review for any irregular activities. You can grab some pre-canned Perfmon import files at http://blogs.technet.com/mikelag/archive/2008/05/02/perfwiz-replacement-for-exchange-2007.aspx to make collecting this data easier.Run ExTRA (Exchange Troubleshooting Assistant) via the Toolbox in the Exchange Management Console to look for any possible Functions (via FCL Logging) that may be consuming Excessive times within the store process. 
This needs to be launched during the problem period. http://blogs.technet.com/mikelag/archive/2008/08/21/using-extra-to-find-long-running-transactions-inside-store.aspx shows how to use FCL logging only, but it would be best to include Perfmon, Exmon, and FCL logging via this tool to capture the most amount of data. Dump the store process during the time of the log growth. (Use this as a last measure once all prior activities have been exhausted and prior to calling Microsoft for assistance. These issues are sometimes intermittent, and the quicker you can obtain any data from the server, the better as this will help provide Microsoft with information on what the underlying cause might be.)Download the Current Release version of the Windows debuggers from http://www.microsoft.com/whdc/devtools/debugging/install64bit.mspx and select a custom installation and change the directory to install the debuggers to c:\debuggers and finish the installation.Open the command prompt and change in to the c:\Debuggers directoryType cscript adplus.vbs –hang –pn store –quiet –o d:\DebugData. Note: -o switch signifies the location in which you want to store the debug data that has sufficient drive space. Important: Once this has launched, a minimized CDB window will open. Please wait for this to complete and do not close this window as this will disappear once the dump has completed.Wait 2 minutes and perform the same dump operation again.Open a case with Microsoft Product Support Services to get this data looked at.Collect a portion of Store transaction log files (100 would be good) during the problem period and parse them following the directions in http://blogs.msdn.com/scottos/archive/2007/11/07/remix-using-powershell-to-parse-ese-transaction-logs.aspx to look for possible patterns such as high pattern counts for IPM.Appointment. This will give you a high level overview if something is looping or a high rate of messages being sent. Note: This tool may or may not provide any benefit depending on the data that is stored in the log files, but sometimes will show data that is MIME encoded that will help with your investigationExport out Message tracking log data from affected MBX server Method 1 Download the attached ExLogGrowthCollector.zip file to this post and extract to the MBX server that experienced the issue. Run ExLogGrowthCollector.ps1 from the Exchange Management Shell. Enter in the MBX server name that you would like to trace, the Start and End times and click on the Collect Logs button. Note: What this script does is to export out all mail traffic to/from the specified mailbox server across all HUB servers between the times specified. This helps provide insight in to any large or looping messages that might have been sent that could have caused the log growth issue. Method 2 Copy/Paste the following data in to notepad, save as msgtrackexport.ps1 and then run this on the affected Mailbox Server. Open in Excel for review. 
This is similar to the GUI version, but requires manual editing to get it to work.#Export Tracking Log data from affected server specifying Start/End Times Write-host "Script to export out Mailbox Tracking Log Information" Write-Host "#####################################################" Write-Host $server = Read-Host "Enter Mailbox server Name" $start = Read-host "Enter start date and time in the format of MM/DD/YYYY hh:mmAM" $end = Read-host "Enter send date and time in the format of MM/DD/YYYY hh:mmPM" $fqdn = $(get-exchangeserver $server).fqdn Write-Host "Writing data out to csv file..... " Get-ExchangeServer | where {$_.IsHubTransportServer -eq "True" -or $_.name -eq "$server"} | Get-MessageTrackingLog -ResultSize Unlimited -Start $start -End $end  | where {$_.ServerHostname -eq $server -or $_.clienthostname -eq $server -or $_.clienthostname -eq $fqdn} | sort-object totalbytes -Descending | export-csv MsgTrack.csv -NoType Write-Host "Completed!! You can now open the MsgTrack.csv file in Excel for review"Method 3You can also use the Process Tracking Log Tool at http://msexchangeteam.com/archive/2008/02/07/448082.aspx to provide some very useful reports.Save off a copy of the application/system logs from the affected server and review them for any events that could attribute to this problemEnable IIS extended logging for CAS and MB server roles to add the sc-bytes and cs-bytes fields to track large messages being sent via IIS protocols and to also track usage patterns. Proactive monitoring and mitigation efforts Increase Diagnostics Logging for the following objects depending on what stores are being affected: MSExchangeIS\Mailbox\Rules MSExchangeIS\PublicFolders\Rules Enable Client Side monitoring per http://technet.microsoft.com/en-us/library/cc540465.aspxCreate a monitoring plan using MOM/SCOM to alert when the amount of Log Bytes being written hit a specific threshold and then alert the messaging team for further action. There are thresholds that are a part of the Exchange 2007 Management Pack that could help alert to these type situations before the problem gets to a point of taking a database offline. Here are 2 examples of this. ESE Log Byte Write/sec MOM threshold Warning Event http://technet.microsoft.com/en-us/library/bb218522.aspxError Event http://technet.microsoft.com/en-us/library/bb218733.aspxIf an alert is raised, then perform an operation to start collecting data. Ensure http://support.microsoft.com/kb/958701 is installed at a minimum for each Outlook 2003 client to address known log/database growth issues for users streaming data to the information store that have exceeded message size limits. This fix also addresses a problem where clients could copy a message to their inbox from a PST that during the sync process could exceed mailbox limits, thus causing excessive log growth problems on the server. These hotfixes make use of the PR_PROHIBIT_SEND_QUOTA and PR_MAX_SUBMIT_MESSAGE_SIZE  which is referenced in http://support.microsoft.com/kb/894795Additional Outlook Log Growth fixes: http://support.microsoft.com/kb/957142http://support.microsoft.com/kb/936184Implement minimum Outlook Client versions that can connect to the Exchange server via the Disable MAPI clients registry key server side. See http://technet.microsoft.com/en-us/library/bb266970.aspx for more information. 
To disable clients less than Outlook 2003 SP2, use the following entries on an Exchange 2007 server "-5.9.9;7.0.0-11.6568.6567" Setting this to exclude Outlook client versions less than Outlook 2003 SP2 will help protect against stream issues to the store. Reason being is that Outlook 2003 SP2 and later understand the new quota properties that were introduced in to the store in http://support.microsoft.com/kb/894795. Older clients have no idea what these new properties are, so if a user sent a 600MB attachment on a message, it would stream the entire message to the store generating excessive log files and then get NDR’ed once the message size limits were checked. With SP2 installed, the Outlook client will first check to see if the attachment size is over the set quota for the organization and immediately stop the send with a warning message on the client and prevent the stream from being sent to the server.Allowing any clients older than SP2 to connect to the store is leaving the Exchange servers open for a growth issue. If Entourage clients are being utilized, then implement the MaxRequestEntityAllowed property in http://support.microsoft.com/kb/935848  to address a known issue where sending a message over the size limit could potentially create log growth for a database. Check to ensure File Level Antivirus exclusions are set correctly for both files and processes per http://technet.microsoft.com/en-us/library/bb332342.aspxEnable Content Conversion tracing on all HUB servers per http://technet.microsoft.com/en-us/library/bb397226.aspx . This will help log any failed conversion attempts that may be causing the log growth problem to occur. If POP3 or IMAP4 clients are connecting to specific servers, then implementing Protocol Logging for each on the servers that may be making use of these protocols will help log data to a log file where these protocols are causing excessive log growth spurts. See http://technet.microsoft.com/en-us/library/aa997690.aspx on how to enable this logging. Ensure Online maintenance is completing a pass for each database within the past week or two. Query Application event logs for the ESE events series 700 through 704 to clarify. If log growth issues occur during online maintenance periods, this could be normal as Exchange shuffles data around in the database. We just need to ensure that we keep this part in mind during these log growth problems. Check for any excessive ExCDO warning events related to appointments in the application log on the server. (Examples are 8230 or 8264 events). http://support.microsoft.com/kb/947014 is just one example of this issue. If recurrence meeting events are found, then try to regenerate calendar data server side via a process called POOF.  See http://blogs.msdn.com/stephen_griffin/archive/2007/02/21/poof-your-calender-really.aspx for more information on what this is. Event Type: Warning Event Source: EXCDO Event Category: General Event ID: 8230 Description: An inconsistency was detected in username@domain.com: /Calendar/<calendar item> .EML. The calendar is being repaired. If other errors occur with this calendar, please view the calendar using Microsoft Outlook Web Access. If a problem persists, please recreate the calendar or the containing mailbox. Event Type: Warning Event ID : 8264 Category : General Source : EXCDO Type : Warning Message : The recurring appointment expansion in mailbox <someone's address> has taken too long. The free/busy information for this calendar may be inaccurate. 
This may be the result of many very old recurring appointments. To correct this, please remove them or change their start date to a more recent date. Important: If 8230 events are consistently seen on an Exchange server, have the user delete/recreate that appointment to remove any corruptionAdd additional store logging per http://support.microsoft.com/kb/254606 to add more performance counter data to be collected with Perfmon. This will allow us to utilize counters such as ImportDeleteOpRate and SaveChangesMessageOpRates which allows us to see what these common log growth rates are.  Recommend forcing end dates on recurring meetings.  This can be done through the usage of the registry key DisableRecurNoEnd (DWORD). For Outlook 2003: http://support.microsoft.com/kb/952144HKEY_CURRENT_USER\Software\Microsoft\Office\11.0\Outlook\Preferences For Outlook 2007: http://support.microsoft.com/kb/955449HKEY_CURRENT_USER\Software\Microsoft\Office\12.0\Outlook\Preferences Value: 1 to Enable, 0 to Disable Implement LimitEmbeddingDepth on the Exchange servers as outlined in KB 833607 to prevent log growth due to recursion looping. Note: This article states this if for Exchange 2000-2003, but the key is also still valid in Exchange 2007 per source code Known Issues Exchange Server SP1 Release Update 9 fixes959559 - Transaction log files grow unexpectedly in an Exchange Server 2007 Service Pack 1 mailbox server on a computer that is running Windows Server 2008 925252 - The Store.exe process uses almost 100 percent of CPU resources, and the size of the public folder store increases quickly in Exchange Server 2007 961124 - Some messages are stuck in the Outbox folder or the Drafts folder on a computer that is running Exchange Server 2007 Service Pack 1 970725 - Public folder replication messages stay in the local delivery queue and cause an Exchange Server 2007 Service Pack 1 database to grow quickly SP1 Release Update 8 fixes960775 - You receive a "Message too large for this recipient" NDR that has the original message attached after you restrict the Maximum Message Send Size value in Exchange Server 2007 SP1 Release Update 7 fixes957124 - You do not receive an NDR message even though your meeting request cannot be sent successfully to a recipient 960775 - You receive a "Message too large for this recipient" NDR that has the original message attached after you restrict the Maximum Message Send Size value in Exchange Server 2007 SP1 Release Update 1 fixes947014 - An Exchange Server 2007 mailbox server randomly generates many transaction logs in an Exchange Server 2007 Service Pack 1 environment 943371 - Event IDs 8206, 8213, and 8199 are logged in an Exchange Server 2007 environment Outlook 2007970944 – Installing this hotfix package addresses and issue where log files are generated unexpectedly when a user is running Outlook 2007 in the cached Exchange mode and sends an e-mail message to the recipients who have a corrupted e-mail address and/or e-mail address  Outlook 2003958701 - Description of the Outlook 2003 Post-Service Pack 3 hotfix package (Engmui.msp, Olkintl.msp, Outlook.msp): October 28, 2008 936184 - Description of the Outlook 2003 post-Service Pack 3 hotfix package: December 14, 2007 897247 - Description of the Microsoft Office Outlook 2003 post-Service Pack 1 hotfix package: May 2, 2005 Entourage935848 - Various performance issues occur when you use Entourage for Mac to send large e-mail messages to an Exchange 2007 server Windows 2008955612 - The "LCMapString" function may return incorrect 
mapping results for some languages in Windows Server 2008 and in Windows Vista ... Read more

Exchange Database Log Store troubleshooting

Connections could not be acquired from the underlying database

Connections could not be acquired from the underlying database; the database cannot be reached. SEVERE: StandardWrapper.Throwable... Read more

mysql database exception password

Using SQL Server Audit to monitor when triggers are enabled or disabled

Scenario: sometimes you find that the business logic inside a trigger did not run. That may be caused by a logic error in the trigger itself, but sometimes it is simply because the trigger has been disabled. ... Read more

database Database sqlserver SQLServer

Fixing errors when installing Oracle Database Lite 10g on Windows 7

Installing Oracle Database Lite 10g on Windows 7: after running setup.exe the following error appears; the error message is shown below:... Read more

Windows7 install Oracle Database Lite 10g

sse_utf8.dbf and generate new database

Problem: executing the "Generate New Database" job resulted in the following error. SBL-GDB-00004: Error in Main function... Read more

sse_utf8.dbf and generate new database

How to handle runInstaller not showing all RAC nodes during database software installation

Recently I ran into a problem installing an 11.2.0.4 RAC on RHEL 5.5. The GI install completed without issue, but when installing the database software with runInstaller, none of the nodes showed up in the graphical interface. Searching MOS turned up the following article: Database runInstaller "Nodes Selection" Window Does not Show Cluster Nodes (Doc ID 1327486.1). Following the instructions in that article... Read more

Database runInstaller not show rac node

Oracle Database 11g resources: original files from Oracle's official site

At readers' suggestion, here are commonly used resources for experiments. The software and systems below are provided for download only to carry out the experiments on this blog. All of them are the original files released by their publishers, with no modifications or cracks of any kind. They are intended solely for study, testing, and experimentation, in accordance with applicable law. If you hold the copyright to any of these resources and believe that sharing them violates the law or your rights, please leave a comment below and I will handle it as soon as possible. The download links below point to the official website; since the originals require authentication to download, please use Thunder (Xunlei) or QQ Xuanfeng to avoid the authentication step. Oracle Database 11g Release 2 (11.2.0.1.0) for Microsoft Windows (32-bit)... Read more

Oracle Database 11g official original download
