Friday, 1 November 2013
Drawing Scale Changed
One day you may find that piping you placed on the grid has moved overnight. Perhaps an administrator changed something, or the cause is simply unknown. Here is the solution: first, check the scale of another "good" drawing. Go to View--Property--Grid and write down the scale number (see the attached picture). Then open the drawing whose lines have moved and change its scale to match the good one.
Labels:
SPPID Solution
Friday, 13 September 2013
ORA-00018 maximum number of sessions exceeded
ORA-00018 maximum number of sessions exceeded
Cause: All session state objects are in use.
Action: Increase the value of the SESSIONS initialization parameter.
Reference: Oracle Documentation
ORA-00018 comes under "Oracle Database Server Messages". These messages are generated
by the Oracle database server when running any Oracle program.
How to increase the SESSIONS initialization parameter:
1. Login as sysdba
sqlplus / as sysdba
2. Check Current Setting of Parameters
sql> show parameter sessions
sql> show parameter processes
sql> show parameter transactions
3. If you are planning to increase the "sessions" parameter, you should also plan to increase the "processes" and "transactions" parameters.
A basic formula for determining these parameter values is as follows:
processes=x
sessions=x*1.1+5
transactions=sessions*1.1
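As a quick check, here is a minimal Python sketch of that formula (not part of the original article); with processes=500 it reproduces the values used in step 4 below, where transactions is rounded down to 610.
# Sizing formula from above: processes -> sessions -> transactions.
processes = 500                        # assumed starting value for illustration
sessions = int(processes * 1.1 + 5)    # 500 * 1.1 + 5 = 555
transactions = int(sessions * 1.1)     # 555 * 1.1 = 610.5, truncated to 610
print(processes, sessions, transactions)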
4. These parameters can't be modified in memory. You have to modify the spfile only (scope=spfile) and bounce the instance.
sql> alter system set processes=500 scope=spfile;
sql> alter system set sessions=555 scope=spfile;
sql> alter system set transactions=610 scope=spfile;
sql> shutdown abort
sql> startup
Related Links:
ORA-00054: resource busy and acquire with NOWAIT specified
http://nimishgarg.blogspot.in/2012/05/ora-00054-resource-busy-and-acquire.html
ORA-00020 maximum number of processes exceeded
http://nimishgarg.blogspot.in/2012/05/ora-00020-maximum-number-of-processes.html
Original Article
Monday, 22 July 2013
OPC text missing its position
An interesting thing has been happening since last week.
The "..." controls the direction of the OPC. After I changed it to left, the text lost its position.
I placed a new one, and the same thing happened.
Labels:
SPPID Solution
Friday, 15 March 2013
Site Server Permission_SPPID Installation
I finished the whole SPPID installation process and it worked well, but after I shared my remote computer with my co-workers, I found they could not open the drawings.
After asking the expert, I learned there is one more step: right-click your Site Server, click Permissions, add your co-worker's name, and then refresh the role in Engineering Manager. It works!
The logic is clear: your co-workers need permission to access the reference files. Otherwise, they can only connect to the server but do nothing.
Labels:
SPPID Solution
Friday, 22 February 2013
How to get remote VM's shared files
There are several methods:
1. Access the VM directly.
Type \\<name of VM>\<drive>$ to go straight into the shared drive. Of course, you need to know that computer's user name and password.
2. Map the shared folder as a network drive on your computer, similar to the above; more convenient next time.
3. From the VM, go to the server (for example \\ca-cab....), where you should have your personal folder. Put the VM files there; after that you can get your files from your personal folder.
Labels:
SPPID Solution
How to add your co-worker to your VM
Two points:
1. Add their user name to your VM.
(a) My Computer--Manage
(b) Groups--Users--Properties
(c) Add
(d) Check the domain name first, then enter your co-worker's user name.
2. Add their user name to Remote Desktop.
(a) Control Panel--System--Remote
(b) Click Select Remote Users
(c) Check the domain first, then add the user name, then click Check Names.
Now you can tell your co-workers to try.
Labels:
SPPID Solution
Friday, 11 January 2013
XenApp more
XenApp, Virtually Everywhere
This post is a little apart from SmartPlant automation, but it covers an important application used for virtualizing desktop applications. Here I'll discuss Citrix XenApp.
Citrix XenApp is an application delivery solution: the application is managed in the datacenter, virtualized and centralized, and delivered as a service to users anywhere, on any device.
Let us look at the benefits of Citrix XenApp with SmartPlant tools. We all know that any EPC project has a minimum life of about three years until the plant is built (please excuse my lack of knowledge if it is done in a shorter duration). In that time Intergraph releases many upgrades for their tools (including service packs and new versions), and clients demand that everything be done on the updated versions of the SmartPlant tools.
Now, if an upgrade requires an upgrade of the OS (MS Windows) or, in a very rare case, the hardware of the systems running the tool, and we have around 50-100 computers to upgrade, that is definitely a drag on project time as well as cost.
Citrix plays a very good role here in limiting cost and time. If the application (here a SmartPlant tool) runs on Citrix XenApp, you only need to upgrade one server. That's it. A single upgrade delivers the new version to everyone.
The tool administrators will also appreciate such an environment. With locally installed applications, if anything needs to be changed (say a new ItemTag.dll in the case of SPPID), it has to be applied on each system. With a Citrix server, change it on one machine and it becomes available to every user.
I hope that is enough to understand the key benefit of a virtualized, centralized application management system.
Have a Happy Life...
Labels:
SPPID Solution
Tuesday, 8 January 2013
What’s the optimal XenApp 6.5 VM configuration?
In this blog series I’m taking a look at scalability considerations for XenApp 6.5, specifically:
- How to estimate XenApp 6.5 Hosted Shared Desktop scalability
- What’s the optimal XenApp 6.5 VM specification?
- Hosted Shared Desktop sizing example
In an ideal world, every project would include time for scalability testing so that the right number of optimally specified servers can be ordered. However, there are various reasons why this doesn’t always take place, including time and budgetary constraints. Architects are all too often asked for their best guess on the resources required. I’ve been in this situation myself and I know just how stressful it can be. If you over specify you’re going to cost your company money whilst under specifying reduces the number of users that can be supported, or even worse – impacts performance.
XenApp Server Virtual Machine Processor Specification
In most situations, testing has shown that optimal scalability is obtained when 4 virtual CPUs are assigned to each virtual machine. When hosting extremely resource intensive applications, such as computer aided design or software development applications, user density can sometimes be improved by assigning 6 or even 8 virtual CPUs to each virtual machine. However, in these situations consider using XenDesktop rather than XenApp so that you have a granular level of control over the resources that are assigned to each user.
User Density per XenApp Server Virtual Machine:
The user density of each 4vCPU virtual XenApp server will vary according to the workloads that they support and the processor architecture of the virtualization host:
- Dual Socket Host: You should expect approximately 36 light users, 24 normal users or 12 heavy users per XenApp virtual machine.
- Quad Socket Host: You should expect approximately 30 light users, 20 normal users or 10 heavy users per XenApp virtual machine.
For example, a dual socket host supporting a mix of 30% light, 60% normal and 10% heavy users could expect, per XenApp virtual machine, approximately (a short code sketch of this arithmetic follows the list):
- 30% light: (36 / 100) x 30 ≈ 11 users
- 60% normal: (24 / 100) x 60 ≈ 14 users
- 10% heavy: (12 / 100) x 10 ≈ 1 user
That gives roughly 26 concurrent users per XenApp virtual machine.
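Here is a minimal Python sketch of that mixed-workload calculation; the density figures are the dual socket numbers quoted above, and the 30/60/10 mix is just the example assumption.
# Per-VM user density for a 4 vCPU XenApp VM on a dual socket host (figures from the text).
density = {"light": 36, "normal": 24, "heavy": 12}
# Assumed workload mix from the example: 30% light, 60% normal, 10% heavy.
mix = {"light": 0.30, "normal": 0.60, "heavy": 0.10}
users_per_vm = {w: round(density[w] * share) for w, share in mix.items()}
print(users_per_vm)                                  # {'light': 11, 'normal': 14, 'heavy': 1}
print("total per VM:", sum(users_per_vm.values()))   # roughly 26 concurrent users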
Number of XenApp Servers per Virtualization Host
When determining the optimal number of virtual XenApp servers per virtualization host, divide the total number of virtual cores by the number of virtual processors assigned to each XenApp virtual machine (typically 4). For example, a server with 32 virtual cores should host 8 virtual XenApp servers (32 / 4 = 8). There is no need to remove server cores for the hypervisor because this overhead has been baked into the user density overheads discussed in the first blog.
One of the questions I get asked most is whether the total number of virtual cores includes hyper-threading or not. First, what is hyper-threading and what does it do?
The Citrix XenDesktop and XenApp Best Practices whitepaper states:
Hyper-threading is a technology developed by Intel that enables a single physical processor to appear as two logical processors. Hyper-threading has the potential to improve the performance of workloads by increasing user density per VM (XenApp only) or VM density per host (XenApp and XenDesktop). For other types of workloads, it is critical to test and compare the performance of workloads with Hyper-threading and without Hyper-threading. In addition, Hyper-threading should be configured in conjunction with the vendor-specific hypervisor tuning recommendations. It is highly recommended to use new generation server hardware and processors (e.g. Nehalem+) and the latest version of the hypervisors to evaluate the benefit of Hyper-threading. The use of hyper-threading will typically provide a performance boost of between 20-30%.
Testing has shown that optimal density is obtained when the total number of virtual cores includes hyper-threading. For example, a server with 16 physical cores should host 8 XenApp VMs – 32 virtual cores (16 physical cores x 2) / 4 virtual CPUs per XenApp virtual machine = 8 XenApp virtual machines.
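As a minimal sketch of that VMs-per-host arithmetic (the core counts are just the example values from the text, and the variable names are mine, not Citrix terminology):
# Virtual cores = physical cores x 2 when hyper-threading is enabled (example from the text).
physical_cores = 16
hyper_threading = True
vcpus_per_xenapp_vm = 4
virtual_cores = physical_cores * (2 if hyper_threading else 1)
xenapp_vms_per_host = virtual_cores // vcpus_per_xenapp_vm
print(virtual_cores, xenapp_vms_per_host)   # 32 virtual cores -> 8 XenApp VMs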
A common mistake is to perform scalability testing with one virtual XenApp server and to multiply the results by the number of virtual machines that the host should be able to support. For example, scalability testing might show that a single XenApp server virtual machine can support 60 concurrent ‘Normal’ users. Therefore, a 16 physical core server with 8 XenApp server virtual machines should be able to support 480 concurrent users. This approach always overestimates user density because the number of users per virtual machine decreases with each additional XenApp VM hosted on the virtualization server. The optimal number of concurrent normal users for a 16 physical core server will be approximately 192 with around 24 users per virtual machine.
Memory Specification
The amount of memory assigned to each virtual machine varies according to the memory requirements of the workload(s) that they support. As a general rule of thumb, memory requirements should be calculated by multiplying the number of light users by 341MB, normal users by 512MB and heavy users by 1024MB (this number includes operating system overhead). Therefore, each virtual machine hosted on a dual socket host should typically be assigned 12GB of RAM and each virtual machine hosted on a quad socket host should be assigned 10GB of RAM.
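A rough sketch of that memory rule of thumb, using the dual socket densities quoted earlier (the numbers are only the article's approximations):
# Memory rule of thumb: 341MB light, 512MB normal, 1024MB heavy (includes OS overhead).
mb_per_user = {"light": 341, "normal": 512, "heavy": 1024}
dual_socket_users = {"light": 36, "normal": 24, "heavy": 12}
# Per-VM memory if the VM hosts a single workload type.
for workload, users in dual_socket_users.items():
    print(workload, round(users * mb_per_user[workload] / 1024, 1), "GB")
# Each case works out to roughly 12 GB, matching the dual socket recommendation above.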
The following table shows typical memory specifications for each processor specification:
Don’t forget to allocate memory for the Hypervisor. With XenServer, this is 752MB by default.
Depending on hardware costs, it may make sense to reduce the number of XenApp server virtual machines per virtualization host. For example, instead of purchasing a virtualization host with 80 virtual cores and 256GB of memory you could reduce the number of XenApp server virtual machines per host from 20 to 19 so that only 192GB of memory will be necessary (2GB for the hypervisor). Although this reduces user density by approximately 30 light /20 normal/10 heavy users per host, it also saves 64GB of memory.
Depending on the hardware specification selected, you may find that your hardware specification allows you to assign more than 12GB of memory to each XenApp virtual machine. It makes sense to use all of the memory available.
Disk Input Output Operations per Second (IOPS)
Regardless of whether local or shared storage is used, the storage subsystem must be capable of supporting the anticipated number of IOPS. As a general rule of thumb, each light user requires an average of 2 steady state IOPS, each normal user requires an average of 4 steady state IOPS and each heavy user requires an average of 8 steady state IOPS. Therefore:
Dual Socket Host -
- Light users: 36 users x 2 IOPS = 72 steady state IOPS per XenApp virtual machine
- Normal users: 24 users x 4 IOPS = 96 steady state IOPS per XenApp virtual machine
- Heavy users: 12 users x 8 IOPS = 96 steady state IOPS per XenApp virtual machine
Quad Socket Host -
- Light users: 30 users x 2 IOPS = 60 steady state IOPS per XenApp virtual machine
- Normal users: 20 users x 4 IOPS = 80 steady state IOPS per XenApp virtual machine
- Heavy users: 10 users x 8 IOPS = 80 steady state IOPS per XenApp virtual machine
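And a small Python sketch of the steady state IOPS arithmetic, using the per-user IOPS and user counts quoted above:
# Steady state IOPS per user from the rule of thumb above.
iops_per_user = {"light": 2, "normal": 4, "heavy": 8}
users_per_vm = {
    "dual_socket": {"light": 36, "normal": 24, "heavy": 12},
    "quad_socket": {"light": 30, "normal": 20, "heavy": 10},
}
for host, users in users_per_vm.items():
    for workload, count in users.items():
        print(host, workload, count * iops_per_user[workload], "steady state IOPS per XenApp VM")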
I realize that some of these recommendations can be hard to follow for first timers which is why the last post in this series will walk you through an example XenApp sizing exercise. Stay tuned!
For more information on recommended best practices when virtualizing Citrix XenApp, please refer to CTX129761 – Virtualization Best Practices.
Andy Baker – Architect
Worldwide Consulting Solutions
Twitter: @adwbaker
XenDesktop Design Handbook
Project Accelerator
Labels:
SPPID Solution