Tech in Translation | 11 security trends worth watching for ChatGPT and generative AI

PwC highlights 11 ChatGPT and generative AI security trends to watch in 2023

Are ChatGPT and generative AI a blessing or a curse for security teams? While artificial intelligence (AI)’s ability to generate malicious code and phishing emails presents new challenges for organizations, it’s also opened the door to a range of defensive use cases, from threat detection and remediation guidance, to securing Kubernetes and cloud environments.

Recently, VentureBeat reached out to some of PwC's top analysts, who shared their thoughts on how generative AI and tools like ChatGPT will impact the threat landscape and what use cases will emerge for defenders.

Overall, the analysts were optimistic that defensive use cases will rise to combat malicious uses of AI over the long term. Predictions on how generative AI will impact cybersecurity in the future include:

Malicious AI usage

The need to protect AI training and output

Setting generative AI usage policies

Modernizing security auditing

Greater focus on data hygiene and assessing bias

Keeping up with expanding risks and mastering the basics

Creating new jobs and responsibilities

Leveraging AI to optimize cyber investments

Enhancing threat intelligence

Threat prevention and managing compliance risk

Implementing a digital trust strategy

Below is an edited transcript of their responses.

1. Malicious AI usage

We are at an inflection point when it comes to the way in which we can leverage AI, and this paradigm shift impacts everyone and everything. When AI is in the hands of citizens and consumers, great things can happen.

At the same time, it can be used by malicious threat actors for nefarious purposes, such as malware and sophisticated phishing emails.

Given the many unknowns about AI’s future capabilities and potential, it’s critical that organizations develop strong processes to build up resilience against cyberattacks.

There’s also a need for regulation underpinned by societal values that stipulates this technology be used ethically. In the meantime, we need to become smart users of this tool, and consider what safeguards are needed in order for AI to provide maximum value while minimizing risks.

Sean Joyce, global cybersecurity and privacy leader, U.S. cyber, risk and regulatory leader, PwC U.S.

2. The need to protect AI training and output

Now that generative AI has reached a point where it can help companies transform their business, it’s important for leaders to work with firms with deep understanding of how to navigate the growing security and privacy considerations.

The reason is twofold. First, companies must protect how they train the AI as the unique knowledge they gain from fine-tuning the models will be critical in how they run their business, deliver better products and services, and engage with their employees, customers and ecosystem.

Second, companies must also protect the prompts and responses they get from a generative AI solution, as they reflect what the company’s customers and employees are doing with the technology.

Mohamed Kande, vice chair — U.S. consulting solutions co-leader and global advisory leader, PwC U.S.

3. Setting generative AI usage policies

Many of the interesting business use cases emerge when you consider that you can further train (fine-tune) generative AI models with your own content, documentation and assets so it can operate on the unique capabilities of your business, in your context. In this way, a business can extend generative AI in the ways they work with their unique IP and knowledge.

This is where security and privacy become important. For a business, the ways you prompt generative AI to generate content should be private for your business. Fortunately, most generative AI platforms have considered this from the start and are designed to enable the security and privacy of prompts, outputs and fine-tuning content.

However, not all users understand this. So, it is important for any business to set policies for the use of generative AI that prevent confidential and private data from going into public systems, and to establish safe and secure environments for generative AI within their business.

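One way to make such a policy operational is a lightweight check that runs before any prompt leaves the company. The sketch below is a minimal illustration of that idea, assuming a hypothetical `submit_prompt` wrapper and a few example patterns; it is not any vendor's API, and a real policy would cover far more than these patterns.

```python
import re

# Illustrative patterns a usage policy might flag before a prompt is sent to a
# public generative AI service. Placeholders only, not a complete policy.
CONFIDENTIAL_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                    # US Social Security number format
    re.compile(r"\b\d{13,16}\b"),                            # possible payment card number
    re.compile(r"(?i)\b(internal use only|confidential)\b"), # document markings
]

def violates_policy(prompt: str) -> bool:
    """Return True if the prompt appears to contain data the policy keeps in-house."""
    return any(p.search(prompt) for p in CONFIDENTIAL_PATTERNS)

def call_public_model(prompt: str) -> str:
    # Stand-in for a real SDK call so the sketch runs on its own.
    return f"[model response to {len(prompt)} characters of input]"

def submit_prompt(prompt: str) -> str:
    """Hypothetical wrapper that enforces the usage policy before calling a public model."""
    if violates_policy(prompt):
        # Route to an approved internal environment or reject outright.
        raise ValueError("Prompt blocked: possible confidential or personal data.")
    return call_public_model(prompt)

if __name__ == "__main__":
    print(submit_prompt("Summarize our public press release in three bullet points."))
    # submit_prompt("Customer SSN 123-45-6789 disputes invoice...")  # would raise ValueError
```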

Bret Greenstein, partner, data, analytics and AI, PwC U.S.

4. Modernizing security auditing

Using generative AI to innovate the audit has amazing possibilities! Sophisticated generative AI has the ability to create responses that take into account certain situations while being written in simple, easy-to-understand language.

What this technology offers is a single point to access information and guidance while also supporting document automation and analyzing data in response to specific queries — and it’s efficient. That’s a win-win.

It’s not hard to see how such a capability could provide a significantly better experience for our people. Plus, a better experience for our people provides a better experience for our clients, too.

Kathryn Kaminsky, vice chair — U.S. trust solutions co-leader

5. Greater focus on data hygiene and assessing bias

Any data input into an AI system is at risk for potential theft or misuse. To start, identifying the appropriate data to input into the system will help reduce the risk of losing confidential and private information to an attack.

Additionally, it’s important to exercise proper data collection to develop detailed and targeted prompts that are fed into the system, so you can get more valuable outputs.

Once you have your outputs, review them with a fine-tooth comb for any inherent biases within the system. For this process, engage a diverse team of professionals to help assess any bias.

Unlike a coded or scripted solution, generative AI is based on models that are trained, and therefore the responses they provide are not 100% predictable. The most trusted output from generative AI requires collaboration between the tech behind the scenes and the people leveraging it.

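To support the kind of bias review described above, one simple approach is to vary a single attribute across otherwise identical prompts and put the outputs side by side for human reviewers. The sketch below assumes that approach; `generate` is a stand-in for whatever model call a team actually uses, and the template and variants are illustrative.

```python
# Minimal sketch: vary one attribute at a time and collect outputs for a human bias review.
PROMPT_TEMPLATE = "Write a short performance summary for a {role} named {name}."
VARIANTS = [
    {"role": "software engineer", "name": "Maria"},
    {"role": "software engineer", "name": "Mohammed"},
    {"role": "software engineer", "name": "John"},
]

def generate(prompt: str) -> str:
    # Stand-in for the real model call so the sketch runs on its own.
    return f"[model output for: {prompt}]"

def collect_for_review(template: str, variants: list) -> list:
    """Pair each prompt variant with its output so reviewers can compare them directly."""
    rows = []
    for fields in variants:
        prompt = template.format(**fields)
        rows.append({"fields": fields, "prompt": prompt, "output": generate(prompt)})
    return rows

if __name__ == "__main__":
    for row in collect_for_review(PROMPT_TEMPLATE, VARIANTS):
        print(row["fields"], "->", row["output"])
```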

Jacky Wagner, principal, cybersecurity, risk and regulatory, PwC U.S.

6. Keeping up with expanding risks and mastering the basics

Now that generative AI is reaching widescale adoption, implementing robust security measures is a must to protect against threat actors. The capabilities of this technology make it possible for cybercriminals to create deep fakes and execute malware and ransomware attacks more easily, and companies need to prepare for these challenges.

The most effective cybermeasures continue to receive the least focus: By keeping up with basic cyberhygiene and condensing sprawling legacy systems, companies can reduce the attack surface for cybercriminals.

Consolidating operating environments can reduce costs, allowing companies to maximize efficiencies and focus on improving their cybersecurity measures.

Joe Nocera, PwC partner leader, cyber, risk and regulatory marketing

7. Creating new jobs and responsibilities

Overall, I’d suggest companies consider embracing generative AI instead of creating firewalls and resisting — but with the appropriate safeguards and risk mitigations in place. Generative AI has some really interesting potential for how work gets done; it can actually help to free up time for human analysis and creativity.

The emergence of generative AI could potentially lead to new jobs and responsibilities related to the technology itself — and creates a responsibility for making sure AI is being used ethically and responsibly.

It also will require employees who utilize this information to develop a new skill — being able to assess and identify whether the content created is accurate.

Much like how a calculator is used for doing simple math-related tasks, there are still many human skills that will need to be applied in the day-to-day use of generative AI, such as critical thinking and customization for purpose — in order to unlock the full power of generative AI.

So, while on the surface it may seem to pose a threat in its ability to automate manual tasks, it can also unlock creativity and provide assistance, upskilling and training opportunities to help people excel in their jobs.

Julia Lamm, workforce strategy partner, PwC U.S.

8. Leveraging AI to optimize cyber investments

Even amidst economic uncertainty, companies aren’t actively looking to reduce cybersecurity spend in 2023; however, CISOs must be economical with their investment decisions.

They are facing pressure to do more with less, leading them to invest in technology that replaces overly manual risk prevention and mitigation processes with automated alternatives.

While generative AI is not perfect, it is very fast, productive and consistent, with rapidly improving skills. By implementing the right risk technology — such as machine learning mechanisms designed for greater risk coverage and detection — organizations can save money, time and headcount, and are better able to navigate and withstand any uncertainty that lies ahead.

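As one illustration of the kind of machine learning mechanism described above, the sketch below uses scikit-learn's IsolationForest to flag unusual activity for analyst review. The feature set, sample data and contamination setting are assumptions made for the example, not a recommended production configuration.

```python
# Unsupervised anomaly detection as a simple stand-in for "greater risk coverage and
# detection": train on baseline activity, then flag events that look unlike it.
from sklearn.ensemble import IsolationForest

# Each row: [login hour (0-23), MB transferred out, failed login attempts]
baseline_activity = [
    [9, 12.0, 0], [10, 8.5, 1], [11, 15.2, 0], [14, 9.8, 0],
    [15, 11.1, 0], [16, 7.3, 1], [9, 13.4, 0], [10, 10.0, 0],
]
new_events = [
    [10, 11.0, 0],   # looks like normal working-hours activity
    [3, 480.0, 7],   # off-hours bulk transfer with repeated failed logins
]

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(baseline_activity)

for event, label in zip(new_events, detector.predict(new_events)):
    status = "flag for analyst review" if label == -1 else "ok"
    print(event, "->", status)
```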

Elizabeth McNichol, enterprise technology solutions leader, cyber, risk and regulatory, PwC U.S.

9. Enhancing threat intelligence

While companies releasing generative AI capabilities are focused on protections to prevent the creation and distribution of malware, misinformation or disinformation, we need to assume generative AI will be used by bad actors for these purposes and stay ahead of these considerations.

In 2023, we fully expect to see further enhancements in threat intelligence and other defensive capabilities to leverage generative AI for good. Generative AI will allow for radical advancements in efficiency and real-time trust decisions; for example, forming real-time conclusions on access to systems and information with a much higher level of confidence than currently deployed access and identity models.

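To make the idea of a real-time trust decision concrete, here is a minimal, hypothetical sketch that combines a few access signals into a risk score and gates access on a threshold. The signal names, weights and threshold are illustrative assumptions, not a description of any deployed access or identity model.

```python
# Hypothetical risk-scored access decision: weights and threshold are illustrative only.
RISK_WEIGHTS = {
    "new_device": 0.4,          # sign-in from a device not seen before
    "impossible_travel": 0.5,   # location inconsistent with the previous sign-in
    "off_hours": 0.2,           # request outside the user's normal working hours
    "mfa_passed": -0.4,         # successful multi-factor authentication lowers risk
}

def access_risk(signals: dict) -> float:
    """Sum the weights of all signals that are present for this request."""
    return sum(RISK_WEIGHTS[name] for name, present in signals.items() if present)

def decide(signals: dict, threshold: float = 0.5) -> str:
    """Allow low-risk requests; require step-up or deny once the score crosses the threshold."""
    return "deny or step-up authentication" if access_risk(signals) >= threshold else "allow"

if __name__ == "__main__":
    print(decide({"new_device": True, "impossible_travel": False, "off_hours": True, "mfa_passed": True}))
    print(decide({"new_device": True, "impossible_travel": True, "off_hours": True, "mfa_passed": False}))
```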

It is certain generative AI will have far-reaching implications on how every industry and company within that industry operates; PwC believes these collective advancements will continue to be human led and technology powered, with 2023 showing the most accelerated advancements that set the direction for the decades ahead.

Matt Hobbs, Microsoft practice leader, PwC U.S.

10. Threat prevention and managing compliance risk

As the threat landscape continues to evolve, the health sector — an industry ripe with personal information — continues to find itself in threat actors’ crosshairs.

Health industry executives are increasing their cyber budgets and investing in automation technologies that can not only help prevent against cyberattacks, but also manage compliance risks, better protect patient and staff data, reduce healthcare costs, eliminate process inefficiencies and much more.

As generative AI continues to evolve, so do associated risks and opportunities to secure healthcare systems, underscoring the importance for the health industry to embrace this new technology while simultaneously building up their cyberdefenses and resilience.

Tiffany Gallagher, health industries risk and regulatory leader, PwC U.S.

11. Implementing a digital trust strategy

The velocity of technological innovation, such as generative AI, combined with an evolving patchwork of regulation and erosion of trust in institutions requires a more strategic approach.

By pursuing a digital trust strategy, organizations can better harmonize across traditionally siloed functions such as cybersecurity, privacy and data governance in a way that allows them to anticipate risks while also unlocking value for the business.

At its core, a digital trust framework identifies solutions above and beyond compliance — instead prioritizing the trust and value exchange between organizations and customers.

Toby Spry, principal, data risk and privacy, PwC U.S.
