Praktice Inc Compliance 

HIPAA Mappings to Praktice AI Controls

Below is a list of HIPAA Safeguards and Requirements and the Praktice AI controls in place to meet them. Each HIPAA rule is listed together with the Praktice AI control that addresses it.

Administrative Safeguards

* Security Management Process - 164.308(a)(1)(i): Risk Management Policy
* Assigned Security Responsibility - 164.308(a)(2): Roles Policy
* Workforce Security - 164.308(a)(3)(i): Employee Policies
* Information Access Management - 164.308(a)(4)(i): System Access Policy
* Security Awareness and Training - 164.308(a)(5)(i): Employee Policy
* Security Incident Procedures - 164.308(a)(6)(i): IDS Policy
* Contingency Plan - 164.308(a)(7)(i): Disaster Recovery Policy
* Evaluation - 164.308(a)(8): Auditing Policy

Physical Safeguards

* Facility Access Controls - 164.310(a)(1): Facility and Disaster Recovery Policies
* Workstation Use - 164.310(b): System Access, Approved Tools, and Employee Policies
* Workstation Security - 164.310(c): System Access, Approved Tools, and Employee Policies
* Device and Media Controls - 164.310(d)(1): Disposable Media and Data Management Policies

Technical Safeguards

* Access Control - 164.312(a)(1): System Access Policy
* Audit Controls - 164.312(b): Auditing Policy
* Integrity - 164.312(c)(1): System Access, Auditing, and IDS Policies
* Person or Entity Authentication - 164.312(d): System Access Policy
* Transmission Security - 164.312(e)(1): System Access and Data Management Policies

Organizational Requirements

* Business Associate Contracts or Other Arrangements - 164.314(a)(1)(i): Business Associate Agreements and 3rd Parties Policies

Policies and Procedures and Documentation Requirements

* Policies and Procedures - 164.316(a): Policy Management Policy
* Documentation - 164.316(b)(1)(i): Policy Management Policy

HITECH Act - Security Provisions

* Notification in the Case of Breach - 13402(a) and (b): Breach Policy
* Timeliness of Notification - 13402(d)(1): Breach Policy
* Content of Notification - 13402(f)(1): Breach Policy

Approved Tools Policy

Praktice AI utilizes a suite of approved software tools for internal use by workforce members. These software tools are either self-hosted, with security managed by Praktice AI, or they are hosted by a Subcontractor with appropriate business associate agreements in place to preserve data integrity. Use of other tools requires approval from Praktice AI leadership.

20.1 List of Approved Tools

* GitHub. GitHub is a hosted service built on top of Git, the version control system. GitHub is utilized for storage and change control of our HIPAA policies, configuration scripts, and other infrastructure automation tools, as well as for source and version control of application code used by Praktice AI.
* Google Apps. Google Apps is used for email and document collaboration inside the Company and with our business partners. Google Drive is used for storage of files and for sharing files with Partners and Customers.
* JIRA. JIRA is used for planning our software development and DevOps activities, configuration management, and generating artifacts for compliance procedures.
* Travis. Travis is a continuous integration tool used to automatically run tests, enforce coding conventions (linting), check for code vulnerabilities, build Docker containers, and deploy to our staging and production environments.
* Snyk. Snyk is a source code security checker that regularly scans our source code and its many open source dependencies for version upgrades, known vulnerabilities, and available patches.
* Amplitude. Amplitude is a hosted analytics and event tracking service that helps us understand (anonymously) how users are interacting with the Praktice AI system.
* Slack. Slack is a hosted messaging and team collaboration tool we use to communicate internally. No PHI, passwords, or other security-related information should ever be posted on Slack.
* KeeperSecurity. KeeperSecurity is a centrally hosted password management tool we use to manage and share credentials internally. This includes the KeeperSecurity browser plug-in, the only approved form-fill/password manager for web browsers used to access Praktice AI systems.
* ESET or Microsoft Anti-Virus. Anti-virus software is used to protect our workstations against infection with malicious software, including computer viruses, ransomware, and other malware.

20.2 List of Forbidden Tools

* Remote access servers that allow external users to connect to workstations accessing Praktice AI systems (unless previously approved by the Security Officer)
* Browser plug-ins in profiles used to access Praktice AI systems (unless explicitly whitelisted by the Security Officer)
* BitTorrent or other file-sharing clients
* Non-standard operating systems or modifications to the operating system kernel

21. 3rd Party Policy

Praktice AI makes every effort to assure all 3rd party organizations are compliant and do not compromise the integrity, security, and privacy of Praktice AI or Praktice AI Customer data. 3rd Parties include Customers, Partners, Subcontractors, and Contracted Developers.
21.1 Applicable Standards

21.1.1 Applicable Standards from the HITRUST Common Security Framework

* 05.i - Identification of Risks Related to External Parties
* 05.k - Addressing Security in Third Party Agreements
* 09.e - Service Delivery
* 09.f - Monitoring and Review of Third Party Services
* 09.g - Managing Changes to Third Party Services
* 10.l - Outsourced Software Development

21.1.2 Applicable Standards from the HIPAA Security Rule

* 164.314(a)(1)(i) - Business Associate Contracts or Other Arrangements

21.2 Policies to Assure 3rd Parties Support Praktice AI Compliance

* Praktice AI only allows 3rd party access to production systems containing ePHI after careful vetting, training in Praktice AI's policies, and, for subcontractors, signing of a Business Associate Agreement. This applies to companies and individual subcontractors alike. Access is granted, documented, and removed using the same procedures as access requests for employees.
* All connections and data in transit between the Praktice AI Platform and 3rd parties are encrypted end to end.
* A standard business associate agreement with Customers and Partners is defined and includes the required security controls in accordance with the organization's security policies. Additionally, responsibility is assigned in these agreements.
* Praktice AI has Service Level Agreements (SLAs) with Subcontractors with an agreed service arrangement addressing liability, service definitions, security controls, and aspects of services management.
  * Subcontractors must coordinate, manage, and communicate any changes to services provided to Praktice AI.
  * Changes to 3rd party services are classified as configuration management changes and are thus subject to the policies and procedures described in §9; substantial changes to services provided by 3rd parties will invoke a Risk Assessment as described in §4.2.
  * Praktice AI utilizes monitoring tools to regularly evaluate Subcontractors against relevant SLAs.
* No Praktice AI Customers or Partners have access outside of their own environment, meaning they cannot access, modify, or delete anything related to other 3rd parties.
* Praktice AI maintains and annually reviews a list of all current Partners and Subcontractors.
  * The list of current Partners and Subcontractors is maintained by the Praktice AI Privacy Officer, includes details on all provided services (along with contact information), and is recorded in §1.4.
  * The annual review of Partners and Subcontractors is conducted as a part of the security, compliance, and SLA review referenced below.
* Praktice AI assesses security, compliance, and SLA requirements and considerations with all Partners and Subcontractors. This includes an annual assessment of SOC 2 Reports for all Praktice AI infrastructure partners.
* Praktice AI leverages recurring calendar invites to assure reviews of all 3rd party services are performed annually. These reviews are performed by the Praktice AI Security Officer and Privacy Officer. The process for reviewing 3rd party services is outlined below:
  * The Security Officer initiates the SLA review by creating an Issue in the JIRA Compliance Review Activity (CRA) Project.
  * The Security Officer, or Privacy Officer, is assigned to review the SLA and performance of 3rd parties. The list of current 3rd parties, including contact information, is also reviewed to assure it is up to date and complete.
  * SLA, security, and compliance performance is documented in the Issue.
  * Once the review is completed and documented, the Security Officer approves or rejects the Issue.
  * If the Issue is rejected, it goes back for further review and documentation.
* Regular review is conducted as required by SLAs to assure security and compliance. These reviews include reports, audit trails, security events, operational issues, failures, and disruptions; identified issues are investigated and resolved in a reasonable and timely manner.
* Any changes to Partner and Subcontractor services and systems are reviewed before implementation.
* For all partners, Praktice AI reviews activity annually to assure partners are in line with SLAs in contracts with Praktice AI.
* SLA review is monitored on an annual basis using JIRA reporting to assess compliance with the above policy.

Employees Policy

Praktice AI is committed to ensuring all workforce members actively address security and compliance in their roles at Praktice AI. As such, training is imperative to assuring an understanding of current best practices, the different types and sensitivities of data, and the sanctions associated with non-compliance.

19.1 Applicable Standards

19.1.1 Applicable Standards from the HITRUST Common Security Framework

* 02.e - Information Security Awareness, Education, and Training
* 06.e - Prevention of Misuse of Information Assets
* 07.c - Acceptable Use of Assets
* 09.j - Controls Against Malicious Code
* 01.y - Teleworking

19.1.2 Applicable Standards from the HIPAA Security Rule

* 164.308(a)(5)(i) - Security Awareness and Training

19.2 Employment Policies

* All new workforce members, including contractors, are given training on security policies and procedures, including operations security, within 30 days of employment.
  * Records of training are kept for all workforce members.
  * Upon completion of training, workforce members complete and sign the training acknowledgement form.
  * Current Praktice AI training documents are available in Praktice AI's Training folder shared on Google Drive.
  * Employees must complete this training before accessing production systems containing ePHI.
* All workforce members are granted access to formal organizational policies, which include the sanction policy for security violations.
* The Praktice AI Employee Handbook clearly states the responsibilities and acceptable behavior regarding information system usage, including rules for email, Internet, and social media usage.
* Workforce members are required to sign an agreement stating that they have read and will abide by all terms outlined in the Praktice AI Employee Handbook, along with all policies and processes described in this document. A Human Resources representative will provide the agreement to new employees during their onboarding process.
* Praktice AI does not allow mobile devices to connect to any of its production networks.
* All workforce members are educated about the approved set of tools to be installed on workstations.
* All new workforce members are given HIPAA training within 30 days of beginning employment. Training includes HIPAA reporting requirements, including the ability to anonymously report security incidents, and the levels of compliance and obligations for Praktice AI and its Customers and Partners. Current Praktice AI training documents are available in Praktice AI's Training folder shared on Google Drive.
* All remote (teleworking) workforce members are trained on the risks, the controls implemented, their responsibilities, and the sanctions associated with violation of policies.
* Employees may only use Praktice AI-vetted workstations for accessing production systems with access to ePHI data.
* Any workstations used to access production systems must be configured as prescribed in §7.8.
* Any workstations used to access production systems must have firewalls and virus protection software installed, configured, and enabled.
* Praktice AI may monitor access and activities of all users on workstations and production systems in order to meet auditing policy requirements (§8).
* Access to internal Praktice AI systems can be requested using the procedures outlined in §7.2. All requests for access must be granted by the Praktice AI Security Officer.
* Requests for modification of access for any Praktice AI employee can be made using the procedures outlined in §7.2.
* Praktice AI employees are strictly forbidden from downloading any ePHI to their workstations. Restricting transfers of ePHI is enforced through technical controls as described in §7.13. Employees found to be in violation of this policy will be subject to sanctions as described in §5.3.3.
* Employees are required to cooperate with federal and state investigations. Employees must not interfere with investigations through willful misrepresentation, omission of facts, or the use of threats against any person. Employees found to be in violation of this policy will be subject to sanctions as described in §5.3.3.

19.3 Issue Escalation

Praktice AI workforce members are to escalate issues using the procedures outlined in the Employee Handbook. Issues that are brought to the Escalation Team are assigned an owner. The membership of the Escalation Team is maintained by the Chief Executive Officer.

Security incidents, particularly those involving ePHI, are handled using the process described in §11.2. If the incident involves a breach of ePHI, the Security Officer will manage the incident using the process described in §12.2. Refer to §11.2 for a list of sample items that can trigger Praktice AI's incident response procedures; if you are unsure whether an issue is a security incident, contact the Security Officer immediately.

It is the duty of the issue owner to follow the process outlined below:

* Create an Issue in the JIRA Compliance Review Activity (CRA) Project.
* The Issue is investigated, documented, and, when a conclusion or remediation is reached, moved to Review.
* The Issue is reviewed by another member of the Escalation Team. If the Issue is rejected, it goes back for further evaluation and review. If the Issue is approved, it is marked as Done, adding any pertinent notes required.
* The workforce member that initiated the process is notified of the outcome via email.

Data Integrity Policy

Praktice AI takes data integrity very seriously. As stewards and partners of Praktice AI Customers, we strive to assure data is protected from unauthorized access and that it is available when needed. The following policies drive many of our procedures and technical settings in support of the Praktice AI mission of data protection.

Production systems that create, receive, store, or transmit Customer data (hereafter "Production Systems") must follow the guidelines described in this section.

17.1 Applicable Standards

17.1.1 Applicable Standards from the HITRUST Common Security Framework

* 10.b - Input Data Validation

17.1.2 Applicable Standards from the HIPAA Security Rule

* 164.308(a)(8) - Evaluation

17.2 Disabling Non-Essential Services

All Production Systems must disable services that are not required to achieve the business purpose or function of the system.

17.3 Monitoring Log-in Attempts

All access to Production Systems must be logged. This is done following the Praktice AI Auditing Policy.

17.4 Prevention of Malware on Production Systems

All Production Systems must have OSSEC running, set to scan the system every 2 hours and at reboot, to assure no malware is present. Detected malware is evaluated and removed. All Production Systems are to be used only for Praktice AI business needs.

17.5 Patch Management

Software patches and updates will be applied to all systems in a timely manner. Routine updates are applied after thorough testing. Updates that correct known vulnerabilities are given priority in testing to speed the time to production. Critical security patches are applied within 30 days of testing, and all security patches are applied within 90 days after testing. Administrators subscribe to relevant mailing lists to stay up to date on the current versions of all Praktice AI managed software on Production Systems.

17.6 Intrusion Detection and Vulnerability Scanning

Production Systems are monitored by IDS systems using Wazuh/OSSEC. Suspicious activity is logged and alerts are generated. Vulnerability scanning of Production Systems must occur on a predetermined, regular basis, no less than annually. Scans are reviewed by the Security Officer, with defined steps for risk mitigation, and retained for future reference.

17.7 Production System Security

System, network, and server security is managed and maintained by the Head of Technology and the Security Officer. Up-to-date system lists and architecture diagrams are kept for all production environments. Access to Production Systems is controlled using centralized tools.

17.8 Production Data Security

* Reduce the risk of compromise of Production Data.
* Implement and/or review controls designed to protect Production Data from improper alteration or destruction.
* Ensure that confidential data is stored in a manner that supports user access logs and automated monitoring for potential security incidents.
* Ensure Praktice AI Customer Production Data is segmented and only accessible to the Customer authorized to access the data.
* All Production Data at rest is stored on encrypted volumes using encryption keys managed by Praktice AI. Encryption at rest is ensured through the use of automated deployment scripts referenced in the Configuration Management Policy.
* Volume encryption keys and the machines that generate volume encryption keys are protected from unauthorized access. Volume encryption key material is protected with access controls such that the key material is only accessible by privileged accounts.
* Encrypted volumes use AES encryption with a minimum of 256-bit keys, or keys and ciphers of equivalent or higher cryptographic strength.

17.9 Transmission Security

* All data transmission is encrypted end to end using encryption keys managed by Praktice AI. Encryption is not terminated at the network end point, and is carried through to the application.
* Transmission encryption keys and the machines that generate keys are protected from unauthorized access. Transmission encryption key material is protected with access controls such that the key material is only accessible by privileged accounts.
* Transmission encryption keys use a minimum of 4096-bit RSA keys, or keys and ciphers of equivalent or higher cryptographic strength (e.g., 256-bit AES session keys in the case of IPsec encryption).
* Transmission encryption keys are limited to use for one year and then must be regenerated.
* In the case of Praktice AI provided APIs, we provide mechanisms to assure the person sending or receiving data is authorized to send and save data.
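For illustration, the minimum key strengths referenced in §17.8 and §17.9 (256-bit AES for encrypted volumes, 4096-bit RSA for transmission) can be generated with standard libraries. The sketch below uses the Python cryptography package; it is a minimal example of producing keys of the required strength, not a description of Praktice AI's actual key management tooling.

```python
# Minimal sketch: generating keys at the minimum strengths required by
# sections 17.8 and 17.9. Illustrative only; this does not reflect
# Praktice AI's actual key management systems or procedures.
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# 17.8: volume encryption keys must be AES with at least 256-bit keys.
volume_key = AESGCM.generate_key(bit_length=256)  # 32 random bytes

# 17.9: transmission encryption keys must be at least 4096-bit RSA
# (or of equivalent or higher cryptographic strength).
transmission_key = rsa.generate_private_key(
    public_exponent=65537,
    key_size=4096,
)

print(len(volume_key) * 8)        # 256
print(transmission_key.key_size)  # 4096
```

Key material generated this way would still need to be stored and access-controlled as described above (accessible only by privileged accounts) and, for transmission keys, regenerated at least annually.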

Vulnerability Scanning Policy

Praktice AI is proactive about information security and understands that vulnerabilities need to be monitored on an ongoing basis. Praktice AI utilizes Nessus Scanner from Tenable to consistently scan, identify, and address vulnerabilities on our systems. We also utilize OSSEC on all systems, including logs, for file integrity checking and intrusion detection.

16.1 Applicable Standards

16.1.1 Applicable Standards from the HITRUST Common Security Framework

* 10.m - Control of Technical Vulnerabilities

16.1.2 Applicable Standards from the HIPAA Security Rule

* 164.308(a)(8) - Evaluation

16.2 Vulnerability Scanning Policy

* Nessus management is performed by the Praktice AI Security Officer with assistance from a designated employee.
* Nessus is used to monitor all internal IP addresses (servers, VMs, etc.) on Praktice AI networks.
* Scanning is performed on a weekly basis and after every production deployment.
* Reviewing Nessus reports and findings, as well as any further investigation into discovered vulnerabilities, is the responsibility of the Praktice AI Security Officer. The process for reviewing Nessus reports is outlined below:
  * The Security Officer initiates the review of a Nessus Report by creating an Issue in the JIRA Compliance Review Activity (CRA) Project.
  * The Security Officer, or a designated employee assigned by the Security Officer, is assigned to review the Nessus Report.
  * If new vulnerabilities are found during review, they are tested using the process outlined below. Once those steps are completed, the Issue is reviewed again.
  * Once the review is completed, the Security Officer approves or rejects the Issue. If the Issue is rejected, it goes back for further review.
  * If the review is approved, the Security Officer then marks the Issue as Done, adding any pertinent notes required.
* In the case of new vulnerabilities, the following steps are taken:
  * All new vulnerabilities are verified manually to assure they are repeatable. Those not found to be repeatable are manually tested after the next vulnerability scan, regardless of whether the specific vulnerability is discovered again.
  * Vulnerabilities that are repeatable manually are documented and reviewed by the Security Officer, Head of Technology, and Privacy Officer to see if they are part of the current risk assessment performed by Praktice AI.
  * Those that are part of the current risk assessment are checked for mitigations. Those that are not part of the current risk assessment trigger a new risk assessment; this process is outlined in detail in the Praktice AI Risk Assessment Policy.
* All vulnerability scanning reports are retained for 6 years by Praktice AI.
* Vulnerability report review is monitored on a quarterly basis using JIRA reporting to assess compliance with the above policy.
* The Praktice AI Security Officer decides on the frequency and scope of penetration testing as part of the regular risk assessment process. External penetration testing is performed by a third party as deemed reasonable and appropriate by the Security Officer in the risk assessment. Internal penetration testing is performed quarterly. The process used to conduct internal penetration tests is outlined below:
  * The Security Officer initiates the penetration test by creating an Issue in the JIRA Compliance Review Activity (CRA) Project.
  * The Security Officer, or a designated employee assigned by the Security Officer, is assigned to conduct the penetration test.
  * Gaps and vulnerabilities identified during penetration testing are reviewed, with plans for correction and/or mitigation, by the Praktice AI Security Officer before the Issue can move to approval.
  * Once the testing is completed, the Security Officer approves or rejects the Issue. If the Issue is rejected, it goes back for further testing and review.
  * If the Issue is approved, the Security Officer then marks the Issue as Done, adding any pertinent notes required.
* Penetration test results are retained for 6 years by Praktice AI.
* Internal penetration testing is monitored on an annual basis using JIRA reporting to assess compliance with the above policy.
* This vulnerability policy is reviewed on a quarterly basis by the Security Officer and Privacy Officer.
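The triage flow for newly discovered vulnerabilities described in §16.2 (verify repeatability, check against the current risk assessment, then either confirm mitigations or trigger a new risk assessment) can be summarized as a simple decision procedure. The sketch below is a hypothetical Python illustration of that logic; the function and field names are assumptions made for clarity and do not correspond to any actual Praktice AI tooling or Nessus integration.

```python
# Hypothetical sketch of the triage flow in section 16.2 for newly
# discovered vulnerabilities. Names and structures are illustrative only.
from dataclasses import dataclass


@dataclass
class Finding:
    plugin_id: str            # scanner identifier for the vulnerability
    repeatable: bool          # verified manually to be repeatable?
    in_risk_assessment: bool  # covered by the current risk assessment?
    mitigated: bool           # existing mitigations confirmed?


def triage(finding: Finding) -> str:
    """Return the next action for a finding, following the policy text."""
    if not finding.repeatable:
        # Re-test manually after the next scheduled scan, even if the
        # scanner does not report the vulnerability again.
        return "retest after next scan"
    if not finding.in_risk_assessment:
        # Not covered by the current risk assessment: trigger a new one
        # per the Praktice AI Risk Assessment Policy.
        return "trigger new risk assessment"
    if not finding.mitigated:
        return "review mitigations with Security Officer"
    return "document and close"


print(triage(Finding("12345", repeatable=True, in_risk_assessment=False, mitigated=False)))
```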

Disposable Media Policy

Praktice AI recognizes that media containing ePHI may be reused when appropriate steps are taken to ensure that all stored ePHI has been effectively rendered inaccessible. Destruction/disposal of ePHI shall be carried out in accordance with federal and state law. The schedule for destruction/disposal shall be suspended for ePHI involved in any open investigation, audit, or litigation.

Praktice AI utilizes dedicated hardware from Subcontractors. ePHI is only stored on SSD volumes in our hosted environment. All SSD volumes utilized by Praktice AI and Praktice AI Customers are encrypted. Praktice AI does not use, own, or manage any mobile devices, SD cards, or tapes that have access to ePHI.

14.1 Applicable Standards

14.1.1 Applicable Standards from the HITRUST Common Security Framework

* 09.o - Management of Removable Media

14.1.2 Applicable Standards from the HIPAA Security Rule

* 164.310(d)(1) - Device and Media Controls

14.2 Disposable Media Policy

* All removable media is restricted, audited, and encrypted.
* Praktice AI assumes all disposable media in its Platform may contain ePHI, so it treats all disposable media with the same protections and disposal policies.
* All destruction/disposal of ePHI media will be done in accordance with federal and state laws and regulations and pursuant to Praktice AI's written retention policy/schedule. Records that have satisfied the period of retention will be destroyed/disposed of in an appropriate manner.
* Records involved in any open investigation, audit, or litigation should not be destroyed/disposed of. If notification is received that any of the above situations have occurred, or there is the potential for such, the record retention schedule shall be suspended for these records until such time as the situation has been resolved. If the records have been requested in the course of a judicial or administrative hearing, a qualified protective order will be obtained to ensure that the records are returned to the organization or properly destroyed/disposed of by the requesting party.
* Before reuse of any media, all ePHI is rendered inaccessible, cleaned, or scrubbed. All media is formatted to restrict future access.
* All Praktice AI Subcontractors provide that, upon termination of the contract, they will return or destroy/dispose of all patient health information. In cases where the return or destruction/disposal is not feasible, the contract limits the use and disclosure of the information to the purposes that prevent its return or destruction/disposal.
* Any media containing ePHI is disposed of using a method that ensures the ePHI cannot be readily recovered or reconstructed.
* The methods of destruction, disposal, and reuse are reassessed periodically, based on current technology, accepted practices, and the availability of timely and cost-effective destruction, disposal, and reuse technologies and services.
* In the case of a Praktice AI Customer terminating a contract with Praktice AI and no longer utilizing Praktice AI Services, the following actions will be taken depending on the Praktice AI Services in use. In all cases it is solely the responsibility of the Praktice AI Customer to maintain the safeguards required by HIPAA once the data is transmitted out of Praktice AI Systems. Praktice AI will provide the Customer with 30 days from the date of termination to export data.

Disaster Recovery Policy

The Praktice AI Contingency Plan establishes procedures to recover Praktice AI following a disruption resulting from a disaster. This Disaster Recovery Policy is maintained by the Praktice AI Security Officer and Privacy Officer.

The following objectives have been established for this plan:

* Maximize the effectiveness of contingency operations through an established plan that consists of the following phases:
  * Notification/Activation phase to detect and assess damage and to activate the plan;
  * Recovery phase to restore temporary IT operations and recover damage done to the original system;
  * Reconstitution phase to restore IT system processing capabilities to normal operations.
* Identify the activities, resources, and procedures needed to carry out Praktice AI processing requirements during prolonged interruptions to normal operations.
* Identify and define the impact of interruptions to Praktice AI systems.
* Assign responsibilities to designated personnel and provide guidance for recovering Praktice AI during prolonged periods of interruption to normal operations.
* Ensure coordination with other Praktice AI staff who will participate in the contingency planning strategies.
* Ensure coordination with external points of contact and vendors who will participate in the contingency planning strategies.

This Praktice AI Contingency Plan has been developed as required under the Office of Management and Budget (OMB) Circular A-130, Management of Federal Information Resources, Appendix III, November 2000, and the Health Insurance Portability and Accountability Act (HIPAA) Final Security Rule, Section §164.308(a)(7), which requires the establishment and implementation of procedures for responding to events that damage systems containing electronic protected health information.

This Praktice AI Contingency Plan is created under the legislative requirements set forth in the Federal Information Security Management Act (FISMA) of 2002 and the guidelines established by the National Institute of Standards and Technology (NIST) Special Publication (SP) 800-34, titled "Contingency Planning Guide for Information Technology Systems," dated June 2002.

The Praktice AI Contingency Plan also complies with the following federal and departmental policies:

* The Computer Security Act of 1987;
* OMB Circular A-130, Management of Federal Information Resources, Appendix III, November 2000;
* Federal Preparedness Circular (FPC) 65, Federal Executive Branch Continuity of Operations, July 1999;
* Presidential Decision Directive (PDD) 67, Enduring Constitutional Government and Continuity of Government Operations, October 1998;
* PDD 63, Critical Infrastructure Protection, May 1998;
* Federal Emergency Management Agency (FEMA), The Federal Response Plan (FRP), April 1999;
* Defense Authorization Act (Public Law 106-398), Title X, Subtitle G, "Government Information Security Reform," October 30, 2000.

Examples of the types of disasters that would initiate this plan are natural disasters, political disturbances, man-made disasters, a zombie apocalypse, external human threats, and internal malicious activities.

Praktice AI defines two categories of systems from a disaster recovery perspective:

* Critical Systems. These systems host application servers and database servers, or are required for the functioning of systems that host application servers and database servers. These systems, if unavailable, affect the integrity of data and must be restored, or have a process begun to restore them, immediately upon becoming unavailable.
* Non-critical Systems. These are all systems not considered critical by the definition above. These systems, while they may affect the performance and overall security of critical systems, do not prevent Critical Systems from functioning and being accessed appropriately. These systems are restored at a lower priority than Critical Systems.

13.1 Applicable Standards

13.1.1 Applicable Standards from the HITRUST Common Security Framework

* 12.c - Developing and Implementing Continuity Plans Including Information Security

13.1.2 Applicable Standards from the HIPAA Security Rule

* 164.308(a)(7)(i) - Contingency Plan

13.2 Line of Succession

The following order of succession ensures that decision-making authority for the Praktice AI Contingency Plan is uninterrupted. The COO is responsible for ensuring the safety of personnel and the execution of procedures documented within this Praktice AI Contingency Plan. If the COO is unable to function as the overall authority or chooses to delegate this responsibility to a successor, the CEO, Head of Technology, or Medical Director shall function as that authority. Should the contingency plan need to be initiated, use the contact list below:

* Stefan Behrens, COO: stefan [at] Praktice AI . com
* Pascal Zuta, CEO: pascal [at] Praktice AI . com
* Kirill Kireyev, Head of Technology: kirill [at] Praktice AI . com
* Mokaram Rauf, Medical Director: mokaram [at] Praktice AI . com

13.3 Responsibilities

The following teams have been developed and trained to respond to a contingency event affecting the IT system.

* The Ops Team is responsible for recovery of the Praktice AI hosted environment, network devices, and all servers. Members of the team include personnel who are also responsible for the daily operations and maintenance of Praktice AI. The team leader is the CTO, who directs the Ops Team.
* The Web Services Team is responsible for assuring all application servers, web services, and platform add-ons are working. It is also responsible for testing redeployments and assessing damage to the environment. The team leader is the CTO, who directs the Web Services Team.

Members of the Ops and Web Services teams must maintain local copies of the contact information from §13.2. Additionally, the CTO must maintain a local copy of this policy in the event Internet access is not available during a disaster scenario.

13.4 Testing and Maintenance

The Head of Technology shall establish criteria for validation/testing of the Contingency Plan, an annual test schedule, and ensure implementation of the test. This process will also serve as training for personnel involved in the plan's execution. At a minimum, the Contingency Plan shall be tested annually (within 365 days). The types of validation/testing exercises include tabletop and technical testing. Contingency Plans for all application systems must be tested at a minimum using the tabletop testing process. However, if the application system Contingency Plan is included in the technical testing of its respective support systems, that technical test will satisfy the annual requirement.

13.4.1 Tabletop Testing

Tabletop Testing is conducted in accordance with the CMS Risk Management Handbook, Volume 2. The primary objective of the tabletop test is to ensure designated personnel are knowledgeable and capable of performing the notification/activation requirements and procedures as outlined in the Contingency Plan, in a timely manner.
The exercises include, but are not limited to: testing to validate the ability to respond to a crisis in a coordinated, timely, and effective manner, by simulating the occurrence of a specific crisis.

13.4.2 Technical Testing

The primary objective of the technical test is to ensure the communication processes and data storage and recovery processes can function at an alternate site to perform the functions and capabilities of the system within the designated requirements. Technical testing shall include, but is not limited to:

* Processing from the backup system at the alternate site;
* Restoring the system using backups; and
* Switching compute and storage resources to the alternate processing site.

13.5 Disaster Recovery Procedures

13.5.1 Notification and Activation Phase

This phase addresses the initial actions taken to detect and assess damage inflicted by a disruption to Praktice AI. Based on the assessment of the Event, sometimes according to the Praktice AI Incident Response Policy, the Contingency Plan may be activated by either the Security Officer or the COO. The notification sequence is listed below:

* The first responder is to notify the COO. All known information must be relayed to the COO.
* The COO is to contact the Technology Team and inform them of the event. The COO is to begin assessment procedures.
* The COO is to notify team members and direct them to complete the assessment procedures outlined below to determine the extent of damage and estimated recovery time. If damage assessment cannot be performed locally because of unsafe conditions, the COO is to initiate the alternate procedures below.
  * Damage Assessment Procedures: The COO is to logically assess damage, gain insight into whether the infrastructure is salvageable, and begin to formulate a plan for recovery.
  * Alternate Assessment Procedures: Upon notification, the COO is to follow the procedures for damage assessment with the Technology Team.
* The Praktice AI Contingency Plan is to be activated if one or more of the following criteria are met:
  * Praktice AI will be unavailable for more than 48 hours.
  * The hosting facility is damaged and will be unavailable for more than 24 hours.
  * Other criteria, as appropriate and as defined by Praktice AI.
* If the plan is to be activated, the COO is to notify and inform team members of the details of the event and whether relocation is required.
* Upon notification from the COO, group leaders and managers are to notify their respective teams. Team members are to be informed of all applicable information and prepared to respond and relocate if necessary.
* The COO is to notify the hosting facility partners that a contingency event has been declared and to ship the necessary materials (as determined by damage assessment) to the alternate site.
* The COO is to notify remaining personnel and executive leadership on the general status of the incident.
* Notification can be by message, email, or phone.

13.5.2 Recovery Phase

This section provides procedures for recovering the application at an alternate site, while other efforts are directed at repairing damage to the original system and capabilities. The following procedures are for recovering the Praktice AI infrastructure at the alternate site. Procedures are outlined per team as required. Each procedure should be executed in the sequence it is presented to maintain efficient operations.

Recovery Goal: The goal is to rebuild the Praktice AI infrastructure to a production state. The tasks outlined below are not sequential and some can be run in parallel.

* Contact Partners and Customers affected
* Assess damage to the environment
* Determine where to rebuild and begin replication of the new environment using automated and tested scripts
* Test the new environment using pre-written tests
* Test logging, security, and alerting functionality
* Assure systems are appropriately patched and up to date
* Deploy the environment to production
* Update DNS to point to the new environment

13.5.3 Reconstitution Phase

This section discusses the activities necessary for restoring Praktice AI operations at the original or new site. The goal is to restore full operations within 24 hours of a disaster or outage. When the hosted data center at the original or new site has been restored, Praktice AI operations at the alternate site may be transitioned back. The goal is to provide a seamless transition of operations from the alternate site to the computer center.

Original or New Site Restoration

* Begin replication of the new environment using automated and tested scripts
* Test the new environment using pre-written tests
* Test logging, security, and alerting functionality
* Deploy the environment to production
* Assure systems are appropriately patched and up to date
* Update DNS to point to the new environment

Plan Deactivation

If the Praktice AI environment is moved back to the original site from the alternate site, all hardware used at the alternate site should be handled and disposed of according to the Praktice AI Media Disposal Policy.

Breach Policy

This policy provides guidance for breach notification when impermissible or unauthorized access, acquisition, use, and/or disclosure of ePHI occurs. Breach notification will be carried out in compliance with the American Recovery and Reinvestment Act (ARRA)/Health Information Technology for Economic and Clinical Health Act (HITECH) as well as any other federal or state notification law.

The Federal Trade Commission (FTC) has published breach notification rules for vendors of personal health records as required by ARRA/HITECH. The FTC rule applies to entities not covered by HIPAA, primarily vendors of personal health records. The rule was effective September 24, 2009, with full compliance required by February 22, 2010.

The American Recovery and Reinvestment Act of 2009 (ARRA) was signed into law on February 17, 2009. Title XIII of ARRA is the Health Information Technology for Economic and Clinical Health Act (HITECH). HITECH significantly impacts the Health Insurance Portability and Accountability Act (HIPAA) Privacy and Security Rules. While HIPAA did not require notification when patient protected health information (PHI) was inappropriately disclosed, covered entities and business associates may have chosen to include notification as part of the mitigation process. HITECH does require notification of certain breaches of unsecured PHI to the following: individuals, the Department of Health and Human Services (HHS), and the media. The effective implementation date for this provision is September 23, 2009 (pending publication of HHS regulations).

In the case of a breach, Praktice AI shall notify all affected Customers. It is the responsibility of the Customers to notify affected individuals.

12.1 Applicable Standards

12.1.1 Applicable Standards from the HITRUST Common Security Framework

* 11.a - Reporting Information Security Events
* 11.c - Responsibilities and Procedures

12.1.2 Applicable Standards from the HIPAA Security Rule

* Security Incident Procedures - 164.308(a)(6)(i)
* HITECH Notification in the Case of Breach - 13402(a) and 13402(b)
* HITECH Timeliness of Notification - 13402(d)(1)
* HITECH Content of Notification - 13402(f)(1)

12.2 Praktice AI Breach Policy

Discovery of Breach: A breach of ePHI shall be treated as "discovered" as of the first day on which such breach is known to the organization or, by exercising reasonable diligence, would have been known to Praktice AI (this includes breaches by the organization's Customers, Partners, or Subcontractors). Praktice AI shall be deemed to have knowledge of a breach if such breach is known, or by exercising reasonable diligence would have been known, to any person, other than the person committing the breach, who is a workforce member or Partner of the organization. Following the discovery of a potential breach, the organization shall begin an investigation (see organizational policies for security incident response and/or risk management incident response) immediately, conduct a risk assessment, and, based on the results of the risk assessment, begin the process to notify each Customer affected by the breach. Praktice AI shall also begin the process of determining what external notifications are required or should be made (e.g., Secretary of the Department of Health & Human Services (HHS), media outlets, law enforcement officials, etc.).

Breach Investigation: The Praktice AI Security Officer shall name an individual to act as the investigator of the breach (e.g., privacy officer, security officer, risk manager, etc.). The investigator shall be responsible for the management of the breach investigation, completion of a risk assessment, and coordination with others in the organization as appropriate (e.g., administration, security incident response team, human resources, risk management, public relations, legal counsel, etc.). The investigator shall be the key facilitator for all breach notification processes to the appropriate entities (e.g., HHS, media, law enforcement officials, etc.). All documentation related to the breach investigation, including the risk assessment, shall be retained for a minimum of six years. A template breach log is located here.

Risk Assessment: For an acquisition, access, use, or disclosure of ePHI to constitute a breach, it must constitute a violation of the HIPAA Privacy Rule. A use or disclosure of ePHI that is incident to an otherwise permissible use or disclosure and occurs despite reasonable safeguards and proper minimum necessary procedures would not be a violation of the Privacy Rule and would not qualify as a potential breach. To determine if an impermissible use or disclosure of ePHI constitutes a breach and requires further notification, the organization will need to perform a risk assessment to determine if there is significant risk of harm to the individual as a result of the impermissible use or disclosure. The organization shall document the risk assessment as part of the investigation in the incident report form, noting the outcome of the risk assessment process. The organization has the burden of proof for demonstrating that all required notifications were made to the appropriate Customers, or that the use or disclosure did not constitute a breach. Based on the outcome of the risk assessment, the organization will determine the need to move forward with breach notification. The risk assessment and the supporting documentation shall be fact specific and address:

* Consideration of who impermissibly used the information or to whom the information was impermissibly disclosed;
* The type and amount of ePHI involved;
* The cause of the breach, and the entity responsible for the breach, either Customer, Praktice AI, or Partner;
* The potential for significant risk of financial, reputational, or other harm.

Timeliness of Notification: Upon discovery of a breach, notice shall be made to the affected Praktice AI Customers no later than 4 hours after the discovery of the breach. It is the responsibility of the organization to demonstrate that all notifications were made as required, including evidence demonstrating the necessity of any delay.

Delay of Notification Authorized for Law Enforcement Purposes: If a law enforcement official states to the organization that a notification, notice, or posting would impede a criminal investigation or cause damage to national security, the organization shall:

* If the statement is in writing and specifies the time for which a delay is required, delay such notification, notice, or posting for the time period specified by the official; or
* If the statement is made orally, document the statement, including the identity of the official making the statement, and delay the notification, notice, or posting temporarily and no longer than 30 days from the date of the oral statement, unless a written statement as described above is submitted during that time.
Content of the Notice: The notice shall be written in plain language and must contain the following information:

* A brief description of what happened, including the date of the breach and the date of the discovery of the breach, if known;
* A description of the types of unsecured protected health information that were involved in the breach (such as whether full name, Social Security number, date of birth, home address, account number, diagnosis, disability code, or other types of information were involved), if known;
* Any steps the Customer should take to protect Customer data from potential harm resulting from the breach;
* A brief description of what Praktice AI is doing to investigate the breach, to mitigate harm to individuals and Customers, and to protect against further breaches;
* Contact procedures for individuals to ask questions or learn additional information, which may include a toll-free telephone number, an e-mail address, a web site, or a postal address.

Methods of Notification: Praktice AI Customers will be notified via email and phone within the timeframe for reporting breaches, as outlined above.

Maintenance of Breach Information/Log: As described above, and in addition to the reports created for each incident, Praktice AI shall maintain a process to record or log all breaches of unsecured ePHI regardless of the number of records and Customers affected. The following information should be collected/logged for each breach (see sample Breach Notification Log):

* A description of what happened, including the date of the breach, the date of the discovery of the breach, and the number of records and Customers affected, if known.
* A description of the types of unsecured protected health information that were involved in the breach (such as full name, Social Security number, date of birth, home address, account number, etc.), if known.
* A description of the action taken with regard to notification of patients regarding the breach.
* Resolution steps taken to mitigate the breach and prevent future occurrences.

Workforce Training: Praktice AI shall train all members of its workforce on the policies and procedures with respect to ePHI as necessary and appropriate for the members to carry out their job responsibilities. Workforce members shall also be trained on how to identify and report breaches within the organization.

Complaints: Praktice AI must provide a process for individuals to make complaints concerning the organization's patient privacy policies and procedures or its compliance with such policies and procedures.

Sanctions: The organization shall have in place, and apply, appropriate sanctions against members of its workforce, Customers, and Partners who fail to comply with privacy policies and procedures.

Retaliation/Waiver: Praktice AI may not intimidate, threaten, coerce, discriminate against, or take other retaliatory action against any individual for the exercise by the individual of any privacy right. The organization may not require individuals to waive their privacy rights as a condition of the provision of treatment, payment, enrollment in a health plan, or eligibility for benefits.

12.3 Praktice AI Platform Customer Responsibilities

Any Praktice AI Customer that accesses, maintains, retains, modifies, records, stores, destroys, or otherwise holds, uses, or discloses unsecured ePHI shall, without unreasonable delay and in no case later than 60 calendar days after discovery of a breach, notify Praktice AI of such breach.
The Customer shall provide Praktice AI with the following information:

* A description of what happened, including the date of the breach, the date of the discovery of the breach, and the number of records and Customers affected, if known.
* A description of the types of unsecured protected health information that were involved in the breach (such as full name, Social Security number, date of birth, home address, account number, etc.), if known.
* A description of the action taken with regard to notification of patients regarding the breach.
* Resolution steps taken to mitigate the breach and prevent future occurrences.

Notice to Media: Praktice AI Customers are responsible for providing notice to prominent media outlets, at the Customer's discretion.

Notice to Secretary of HHS: Praktice AI Customers are responsible for providing notice to the Secretary of HHS, at the Customer's discretion.

12.4 Sample Letter to Customers in Case of Breach

[Date]

[Name]
[Name of Customer]
[Address 1]
[Address 2]
[City, State Zip Code]

Dear [Name of Customer]:

I am writing to you from Praktice AI.com, Inc., with important information about a recent breach that affects your account with us. We became aware of this breach on [Insert Date], which occurred on or about [Insert Date]. The breach occurred as follows:

Describe the event and include the following information:

* A brief description of what happened, including the date of the breach and the date of the discovery of the breach, if known.
* A description of the types of unsecured protected health information that were involved in the breach (such as whether full name, Social Security number, date of birth, home address, account number, diagnosis, disability code, or other types of information were involved), if known.
* Any steps the Customer should take to protect themselves from potential harm resulting from the breach.
* A brief description of what Praktice AI is doing to investigate the breach, to mitigate harm to individuals, and to protect against further breaches.
* Contact procedures for individuals to ask questions or learn additional information, which includes a toll-free telephone number, an e-mail address, a web site, or a postal address.

Other Optional Considerations:

* Recommendations to assist the Customer in remedying the breach.
* We will assist you in remedying the situation.

Sincerely,

Srinath Akula
CEO - Praktice Inc.
[at] Praktice AI
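For reference, the breach log fields required by §12.2 (and mirrored in the Customer reporting requirements of §12.3) can be captured in a simple record structure. The following Python sketch is purely illustrative; the class and field names are assumptions and do not describe an actual Praktice AI system or the Breach Notification Log template referenced above.

```python
# Illustrative sketch of one entry in a breach notification log, mirroring
# the fields required by section 12.2. Names are assumptions, not an actual
# Praktice AI data model.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional


@dataclass
class BreachLogEntry:
    description: str                    # what happened
    discovery_date: date                # date the breach was discovered
    breach_date: Optional[date]         # date of the breach, if known
    records_affected: Optional[int]     # number of records, if known
    customers_affected: Optional[int]   # number of Customers, if known
    phi_types: list[str] = field(default_factory=list)  # e.g., name, SSN, DOB
    notification_actions: str = ""      # actions taken to notify Customers
    resolution_steps: str = ""          # mitigation and prevention steps
```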

Incident Response Policy

Praktice AI implements an information security incident response process to consistently detect, respond to, and report incidents, minimize loss and destruction, mitigate the weaknesses that were exploited, and restore information system functionality and business continuity as soon as possible.

The incident response process addresses:

* Continuous monitoring of threats through intrusion detection systems (IDS) and other monitoring applications;
* Establishment of an information security incident response team;
* Establishment of procedures to respond to media inquiries;
* Establishment of clear procedures for identifying, responding to, assessing, analyzing, and following up on information security incidents;
* Workforce training, education, and awareness on information security incidents and required responses; and
* Facilitation of clear communication of information security incidents with internal, as well as external, stakeholders.

Note: These policies were adapted from work by the HIPAA Collaborative of Wisconsin Security Networking Group. Refer to the linked document for additional copyright information.

11.1 Applicable Standards

11.1.1 Applicable Standards from the HITRUST Common Security Framework

* 11.a - Reporting Information Security Events
* 11.c - Responsibilities and Procedures

11.1.2 Applicable Standards from the HIPAA Security Rule

* 164.308(a)(5)(i) - Security Awareness and Training
* 164.308(a)(6) - Security Incident Procedures

11.2 Incident Management Policies

The Praktice AI incident response process follows the process recommended by SANS, an industry leader in security. Process flows are a direct representation of the SANS process, which can be found in this document.

Praktice AI's incident response process classifies security-related events into the following categories:

* Events - Any observable computer security-related occurrence in a system or network with a negative consequence. Examples: a hardware component failing and causing service outages; a software error causing service outages; general network or system instability.
* Precursors - A sign that an incident may occur in the future. Examples: a monitoring system showing unusual behavior; audit log alerts indicating several failed login attempts; suspicious emails targeting specific Praktice AI staff members with administrative access to production systems.
* Indications - A sign that an incident may have occurred or may be occurring at the present time. Examples: IDS alerts for modified system files or unusual system accesses; antivirus alerts for infected files; excessive network traffic directed at unexpected geographic locations.
* Incidents - A violation of computer security policies or acceptable use policies, often resulting in data breaches. Examples: unauthorized disclosure of ePHI; unauthorized change or destruction of ePHI; a data breach accomplished by an internal or external entity; a Denial-of-Service (DoS) attack causing a critical service to become unreachable.

Praktice AI employees must report any unauthorized or suspicious activity seen on production systems or associated with related communication systems (such as email or Slack). In practice this means keeping an eye out for security events, and letting the Security Officer know about any observed precursors or indications as soon as they are discovered.
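The four event categories above form a simple classification scheme. The sketch below is a hypothetical Python representation of that taxonomy, included only to make the classification concrete; the names are assumptions and do not describe actual Praktice AI tooling.

```python
# Hypothetical representation of the incident classification in section 11.2.
# Illustrative only; not an actual Praktice AI system.
from enum import Enum


class SecurityEventCategory(Enum):
    EVENT = "observable occurrence with a negative consequence"
    PRECURSOR = "sign that an incident may occur in the future"
    INDICATION = "sign that an incident may have occurred or is occurring"
    INCIDENT = "violation of security or acceptable use policies"


def requires_sirt(category: SecurityEventCategory) -> bool:
    """Only confirmed Incidents activate the Security Incident Response Team;
    Events, Precursors, and Indications are routed to the appropriate
    resource by the Security Officer (see 11.2.1)."""
    return category is SecurityEventCategory.INCIDENT


print(requires_sirt(SecurityEventCategory.PRECURSOR))  # False
```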
11.2.1 Identification Phase Immediately upon observation Praktice AI members report suspected and known Events, Precursors, Indications, and Incidents in one of the following ways: Direct report to management, the Security Officer, Privacy Officer, or other; Email; Phone call; Secure Chat; Anonymously through workforce members desired channels. The individual receiving the report facilitates completion of an Incident Identification form and notifies the Security Officer (if not already done). The Security Officer determines if the issue is an Event, Precursor, Indication, or Incident. If the issue is an event, indication, or precursor the Security Officer forwards it to the appropriate resource for resolution. Non-Technical Event (minor infringement): the Security Officer completes a SIR Form and investigates the incident. Technical Event: Assign the issue to an IT resource for resolution. This resource may also be a contractor or outsourced technical resource, in the event of a small office or lack of expertise in the area. If the issue is a security incident the Security Officer activates the Security Incident Response Team (SIRT) and notifies senior management. If a non-technical security incident is discovered the SIRT completes the investigation, implements preventative measures, and resolves the security incident. Once the investigation is completed, progress to Phase V, Follow-up. If the issue is a technical security incident, commence to Phase II: Containment. The Containment, Eradication, and Recovery Phases are highly technical. It is important to have them completed by a highly qualified technical security resource with oversight by the SIRT team. Each individual on the SIRT and the technical security resource document all measures taken during each phase, including the start and end times of all efforts. The lead member of the SIRT team facilitates initiation of a SIR Form or an Incident Survey Form. The intent of the SIR form is to provide a summary of all events, efforts, and conclusions of each Phase of this policy and procedures. The Security Officer, Privacy Officer, or Praktice AI representative appointed notifies any affected Customers and Partners. If no Customers and Partners are affected, notification is at the discretion of the Security and Privacy Officer. In the case of a threat identified, the Security Officer is to form a team to investigate and involve necessary resources, both internal to Praktice AI and potentially external. 11.2.2 Containment Phase (Technical) In this Phase, Praktice AI’s IT department attempts to contain the security incident. It is extremely important to take detailed notes during the security incident response process. This provides that the evidence gathered during the security incident can be used successfully during prosecution, if appropriate. The SIRT reviews any information that has been collected by the Security Officer or any other individual investigating the security incident. The SIRT secures the network perimeter. The IT department performs the following: Securely connect to the affected system over a trusted connection. Retrieve any volatile data from the affected system. Determine the relative integrity and the appropriateness of backing the system up. If appropriate, back up the system. Change the password(s) to the affected system(s). Determine whether it is safe to continue operations with the affect system(s). 
11.2.3 Eradication Phase (Technical)

The Eradication Phase represents the SIRT's effort to remove the cause, and the resulting security exposures, that are now on the affected system(s).

Determine the symptoms and cause related to the affected system(s).
Strengthen the defenses surrounding the affected system(s), where possible (a risk assessment may be needed and can be determined by the Security Officer). This may include: an increase in network perimeter defenses; an increase in system monitoring defenses; remediating ("fixing") any security issues within the affected system, such as removing unused services and applying general host hardening techniques.
Conduct a detailed vulnerability assessment to verify that all the holes/gaps that can be exploited have been addressed. If additional issues or symptoms are identified, take appropriate preventative measures to eliminate or minimize potential future compromises.
Complete the Eradication Form.
Update the documentation with the information learned from the vulnerability assessment, including the cause, symptoms, and the method used to fix the problem with the affected system(s).
Apprise Senior Management of the progress.
Continue to notify affected Customers and Partners with relevant updates as needed.
Move to Phase IV, Recovery.

11.2.4 Recovery Phase (Technical)

The Recovery Phase represents the SIRT's effort to restore the affected system(s) back to operation after the resulting security exposures, if any, have been corrected.

The technical team determines if the affected system(s) have been changed in any way.
If they have, the technical team restores the system to its proper, intended functioning ("last known good"). Once restored, the team validates that the system functions the way it was intended and had functioned in the past (an illustrative validation sketch follows this section). This may require the involvement of the business unit that owns the affected system(s).
If operation of the system(s) had been interrupted (i.e., the system(s) had been taken offline or dropped from the network while being triaged), restart the restored and validated system(s) and monitor for proper behavior.
If the system had not been changed in any way, but was taken offline (i.e., operations had been interrupted), restart the system and monitor for proper behavior.
Update the documentation with the detail that was determined during this phase.
Apprise Senior Management of progress.
Continue to notify affected Customers and Partners with relevant updates as needed.
Move to Phase V, Follow-up.
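Validating a restored system can be as simple as re-running its health checks and comparing key service endpoints against expected responses before returning the system to normal operation. The endpoints and expected status codes below are hypothetical placeholders; the real checks depend on the affected system and are defined by the team that owns it.

```python
# Hypothetical post-restore validation for the Recovery Phase: confirm that key
# endpoints of the restored service respond as expected before returning it to
# normal operation. Endpoints and expected codes are illustrative assumptions.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

# Endpoint -> expected HTTP status code (placeholders for the affected system).
EXPECTED_RESPONSES = {
    "https://staging.example.internal/healthz": 200,
    "https://staging.example.internal/api/status": 200,
}

def validate_restored_system(checks=EXPECTED_RESPONSES):
    """Return True only if every endpoint responds with its expected status."""
    all_ok = True
    for url, expected in checks.items():
        try:
            with urlopen(url, timeout=10) as response:
                status = response.status
        except HTTPError as err:
            status = err.code
        except URLError as err:
            print(f"FAIL {url}: unreachable ({err.reason})")
            all_ok = False
            continue
        if status == expected:
            print(f"OK   {url}: {status}")
        else:
            print(f"FAIL {url}: got {status}, expected {expected}")
            all_ok = False
    return all_ok

if __name__ == "__main__":
    # Record the outcome in the SIR Form before moving to Phase V, Follow-up.
    print("Validation passed" if validate_restored_system() else "Validation failed")
```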
11.2.5 Follow-up Phase (Technical and Non-Technical)

The Follow-up Phase represents the review of the security incident to look for "lessons learned" and to determine whether the process that was taken could have been improved in any way. It is recommended that all security incidents be reviewed shortly after resolution to determine where response could be improved. Timeframes may extend to one to two weeks post-incident.

Responders to the security incident (the SIRT and the technical security resource) meet to review the documentation collected during the security incident.
Create a "lessons learned" document and attach it to the completed SIR Form.
Evaluate the cost and impact of the security incident to Praktice AI using the documents provided by the SIRT and the technical security resource.
Determine what could be improved.
Communicate these findings to Senior Management for approval and for implementation of any recommendations made post-review of the security incident.
Carry out recommendations approved by Senior Management; sufficient budget, time, and resources should be committed to this activity.
Close the security incident.

11.2.6 Periodic Evaluation

The processes surrounding security incident response are periodically reviewed and evaluated for effectiveness. This also involves appropriate training of resources expected to respond to security incidents, as well as training of the general workforce regarding Praktice AI's expectations of them relative to security responsibilities. The incident response plan is tested annually.

11.3 Security Incident Response Team (SIRT)

Current members of the Praktice AI SIRT:

Security Officer
Privacy Officer

Facility Access Policy

Praktice AI works with Subcontractors to assure restriction of physical access to systems used as part of the Praktice AI Platform. Praktice AI and its Subcontractors control access to the physical buildings/facilities that house these systems/applications, or in which Praktice AI workforce members operate, in accordance with the HIPAA Security Rule 164.310 and its implementation specifications. Physical access to all Praktice AI facilities is limited to those authorized in this policy. In an effort to safeguard ePHI from unauthorized access, tampering, and theft, access to areas is allowed only to those persons authorized to be in them, and unauthorized persons must be escorted. All workforce members are responsible for reporting incidents of unauthorized visitors and/or unauthorized access to Praktice AI's facilities.

Of note, Praktice AI does not physically house any systems used by its Platform in Praktice AI facilities. Physical security of our Platform servers is outlined in §1.3.

10.1 Applicable Standards

10.1.1 Applicable Standards from the HITRUST Common Security Framework

08.b - Physical Entry Controls
08.d - Protecting Against External and Environmental Threats
08.j - Equipment Maintenance
08.l - Secure Disposal or Re-Use of Equipment
09.p - Disposal of Media

10.1.2 Applicable Standards from the HIPAA Security Rule

164.310(a)(2)(ii) - Facility Security Plan
164.310(a)(2)(iii) - Access Control & Validation Procedures
164.310(b)-(c) - Workstation Use & Security

10.2 Praktice AI-controlled Facility Access Policies

Visitor and third-party support access is recorded and supervised. All visitors are escorted.
Repairs are documented, and the documentation is retained.
Fire extinguishers and detectors are installed according to applicable laws and regulations.
Maintenance is controlled and conducted by authorized personnel in accordance with supplier-recommended intervals, insurance policies, and the organization's maintenance program.
Electronic and physical media containing covered information is securely destroyed (or the information securely removed) prior to disposal. The organization securely disposes of media containing sensitive information.
Physical access is restricted using locks. Restricted areas and facilities are locked when unattended (where feasible).
Only authorized workforce members receive access to restricted areas (as determined by the Security Officer).
Access and keys are revoked upon termination of workforce members. Workforce members must report lost and/or stolen key(s) to the Security Officer.

Enforcement of Facility Access Policies

Report violations of this policy to the restricted area's department team leader, supervisor, manager, or director, or to the Privacy Officer. Workforce members in violation of this policy are subject to disciplinary action, up to and including termination. Visitors in violation of this policy are subject to loss of vendor privileges and/or termination of services from Praktice AI.

Workstation Security

Workstations may only be accessed and utilized by authorized workforce members to complete assigned job/contract responsibilities. All workforce members are required to monitor workstations and report unauthorized users and/or unauthorized attempts to access systems/applications, as per the System Access Policy. All workstations purchased by Praktice AI are the property of Praktice AI and are distributed to users by the company.