{"id":34522,"date":"2026-04-17T09:01:07","date_gmt":"2026-04-17T09:01:07","guid":{"rendered":"https:\/\/pickbydoc.com\/?p=34522"},"modified":"2026-04-17T09:01:13","modified_gmt":"2026-04-17T09:01:13","slug":"your-new-therapist-chatty-leaky-and-hardly-human","status":"publish","type":"post","link":"https:\/\/pickbydoc.com\/?p=34522","title":{"rendered":"Your New Therapist: Chatty, Leaky, and Hardly Human"},"content":{"rendered":"<p> <br \/>\n<\/p>\n<div>\n<p class=\"has-black-color has-text-color has-background has-link-color wp-elements-ecd2afccee69739925d14fa5a17198a0\" style=\"background-color:#d9d9d9;padding-top:var(--wp--preset--spacing--50)\"><em><em>If you or someone you know may be experiencing a mental health crisis, contact the 988 Suicide &amp; Crisis Lifeline by dialing or texting \u201c988.\u201d<\/em><\/em><\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n<p>Vince Lahey of Carefree, Arizona, embraces chatbots. From Big Tech products to \u201cshady\u201d ones, they offer \u201csomeone that I could share more secrets with than my therapist.\u201d<\/p>\n<p>He especially likes the apps for feedback and support, even though sometimes they berate him or lead him to fight with his ex-wife. \u201cI feel more inclined to share more,\u201d Lahey said. \u201cI don\u2019t care about their perception of me.\u201d<\/p>\n<p>There are a lot of people like Lahey.<\/p>\n<p>Demand for mental health care has grown. Self-reported poor mental health days rose by 25% since the 1990s, <a href=\"https:\/\/www.sciencedirect.com\/science\/article\/pii\/S2950004423000135\" target=\"_blank\" rel=\"noopener\">found one study<\/a> analyzing survey data. According to the Centers for Disease Control and Prevention, suicide rates in 2022 <a href=\"https:\/\/www.cdc.gov\/nchs\/products\/databriefs\/db509.htm\" target=\"_blank\" rel=\"noopener\">matched a 2018 high<\/a> that hadn\u2019t been seen in nearly 80 years.<\/p>\n<p>There are many patients who find a nonhuman therapist, powered by artificial intelligence, highly appealing \u2014 more appealing than a human with a reclining couch and stern manner. <a href=\"https:\/\/www.tiktok.com\/@sarahhh11124\/video\/7514413851983498526?q=ai%20therapist&amp;t=1769094900540\" target=\"_blank\" rel=\"noopener\">Social media is replete<\/a> with <a href=\"https:\/\/www.tiktok.com\/@honesthourpolly\/video\/7546022813321153847?q=AI%20therapist&amp;t=1760971140947\" target=\"_blank\" rel=\"noopener\">videos<\/a> begging for a therapist who\u2019s \u201cnot on the clock,\u201d who\u2019s less judgmental, or who\u2019s just less expensive.<\/p>\n<p>Most people who need care don\u2019t get it, said Tom Insel, former head of the National Institute of Mental Health, citing his former agency\u2019s research. Of those who do, 40% receive \u201cminimally acceptable care.\u201d<\/p>\n<p>\u201cThere\u2019s a massive need for high-quality therapy,\u201d he said. \u201cWe\u2019re in a world in which the status quo is really crappy, to use a scientific term.\u201d<\/p>\n<p>Insel said engineers from OpenAI told him last fall that about 5% to 10% of the company\u2019s then-roughly 800 million-strong user base rely on ChatGPT for mental health support.<\/p>\n<p>Polling suggests these AI chatbots may be even more popular among young adults. 
A KFF poll found about 3 in 10 respondents ages 18 to 29 [turned to AI chatbots](https://www.kff.org/public-opinion/kff-tracking-poll-on-health-information-and-trust-use-of-ai-for-health-information-and-advice/) for mental or emotional health advice in the past year. Uninsured adults were about twice as likely as insured adults to report using AI tools. And nearly 60% of adult respondents who used a chatbot for mental health didn’t follow up with a flesh-and-blood professional.

## The App Will Put You on the Couch

A burgeoning industry of apps offers AI therapists with human-like, often unrealistically attractive avatars serving as a sounding board for those experiencing anxiety, depression, and other conditions.

KFF Health News identified some 45 AI therapy apps in Apple’s App Store in March. While many charge steep prices for their services — one listed an annual plan for $690 — they’re still generally cheaper than talk therapy, which can cost hundreds of dollars an hour without insurance coverage.

On the App Store, “therapy” is often used as a marketing term, with small print noting the apps cannot diagnose or treat disease. One app, branded as OhSofia! AI Therapy Chat, had downloads in the six figures, OhSofia! founder Anton Ilin said in December.

“People are looking for therapy,” Ilin said. On one hand, the product’s name [promises “therapy chat”](https://apps.apple.com/us/app/ohsofia-ai-therapy-chat/id6444262874); on the other, it warns in [its privacy policy](https://ohsofia.com/privacy) that it “does not provide medical advice, diagnosis, treatment, or crisis intervention and is not a substitute for professional healthcare services.” Executives don’t think that’s confusing, since there are disclaimers in the app.

The apps promise big results without evidence to back them up. [One promises](https://apps.apple.com/ph/app/ai-therapist-chatbot/id6749176021) its users “immediate help during panic attacks.” [Another claims](https://apps.apple.com/us/app/ai-therapist-stress-anxiety/id6569245569) it was “proven effective by researchers” and that it offers 2.3 times faster relief for anxiety and stress. (It doesn’t say what it’s faster than.)

There are few legislative or regulatory guardrails around how developers may describe their products — or even around whether the products are safe or effective, said Vaile Wright, senior director of the office of health care innovation at the American Psychological Association. Even federal patient privacy protections don’t apply, she said.

“Therapy is not a legally protected term,” Wright said.
“So, basically, anybody can say that they give therapy.”

Many of the apps “overrepresent themselves,” said John Torous, a psychiatrist and clinical informaticist at Beth Israel Deaconess Medical Center. “Deceiving people that they have received treatment when they really have not has many negative consequences,” including delaying actual care, he said.

States such as Nevada, Illinois, and California are trying to sort out the regulatory disarray, enacting laws that forbid apps from describing their chatbots as AI therapists.

“It’s a profession. People go to school. They get licensed to do it,” said Jovan Jackson, a Nevada legislator who co-authored an enacted bill banning apps from referring to themselves as mental health professionals.

Despite the hype, outside researchers and company representatives themselves have told the FDA and Congress that there’s little evidence supporting the efficacy of these products. What studies exist [give](https://ojs.aaai.org/index.php/AIES/article/view/36632/38770) [contradictory answers](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5718163) — and some [research suggests](https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2840495) companion-focused chatbots are “consistently poor” at managing crises.

“When it comes to chatbots, we don’t have any good evidence it works,” said Charlotte Blease, a professor at Sweden’s Uppsala University who specializes in trial design for digital health products.

The lack of “good quality” clinical trials stems from the FDA’s failure to provide recommendations about how to test the products, she said. “FDA is offering no rigorous advice on what the standards should be.”

Department of Health and Human Services spokesperson Emily Hilliard said in response that “patient safety is the FDA’s highest priority” and that AI-based products are subject to agency regulations requiring the demonstration of “reasonable assurance of safety and effectiveness before they can be marketed in the U.S.”

## The Silver-Tongued Apps

Preston Roche, a psychiatry resident who’s [active on social media](https://www.tiktok.com/discover/preston-roche-psych), gets lots of questions about whether AI is a good therapist. After trying ChatGPT himself, he said he was initially “impressed” that it was able to use [cognitive behavioral therapy](https://www.mayoclinic.org/tests-procedures/cognitive-behavioral-therapy/about/pac-20384610) techniques to help him put negative thoughts “on trial.”

But Roche said that after seeing social media posts discussing people developing psychosis or being encouraged to make harmful decisions, he became disillusioned.
The bots, he concluded, are sycophantic.

“When I look globally at the responsibilities of a therapist, it just completely fell on its face,” he said.

This sycophancy — the tendency of apps based on large language models to empathize with, flatter, or delude their human conversation partner — is inherent to the apps’ design, experts in digital health say.

“The models were developed to answer a question or prompt that you ask and to give you what you’re looking for,” said Insel, the former NIMH director, “and they’re really good at basically affirming what you feel and providing psychological support, like a good friend.”

That’s not what a good therapist does, though. “The point of psychotherapy is mostly to make you address the things that you have been avoiding,” he said.

While polling suggests many users are satisfied with what they’re getting out of ChatGPT and other apps, there have been [high-profile reports](https://www.theguardian.com/society/2026/mar/31/teenager-asked-chatgpt-most-successful-ways-take-life-inquest-told) about the service [providing advice](https://www.bbc.com/news/articles/cp3x71pv1qno) or encouragement to self-harm.

And [at least](https://www.hbsslaw.com/press/openai-chatgpt-wrongful-death-claim/lawsuit-filed-against-openai-following-murder-suicide-in-connecticut?utm_source=chatgpt.com) [one](https://socialmediavictims.org/press-releases/smvlc-tech-justice-law-project-lawsuits-accuse-chatgpt-of-emotional-manipulation-supercharging-ai-delusions-and-acting-as-a-suicide-coach/) [dozen](https://cdn.arstechnica.net/wp-content/uploads/2025/12/First-County-Bank-v-OpenAI-Complaint-12-11-25.pdf) [lawsuits](https://www.documentcloud.org/documents/26078522-raine-vs-openai-complaint/) [alleging](https://www.documentcloud.org/documents/27170664-decruise-v-opeenai/) [wrongful death](https://cdn.arstechnica.net/wp-content/uploads/2026/01/Gray-v-OpenAI-Complaint.pdf) or [serious harm](https://futurism.com/artificial-intelligence/mental-illness-chatgpt-psychosis-lawsuit) have been filed against OpenAI after ChatGPT users died by suicide or were hospitalized. In most of those cases, the plaintiffs allege they began using the apps for one purpose — like schoolwork — before confiding in them.
These cases are being [consolidated into a class-action lawsuit](https://techjusticelaw.org/wp-content/uploads/2026/03/OAI_JCCP_Order_re_Petition_for_Coordination_-_5431.pdf).

Google and the startup Character.ai — which has been funded by Google and has created “avatars” that adopt specific personas, like athletes, celebrities, study buddies, or therapists — are settling other wrongful-death lawsuits, [according to](https://www.axios.com/2026/01/07/google-character-ai-lawsuits-teen-suicides) [media reports](https://www.wsj.com/tech/ai/ai-chatbot-startup-google-to-settle-lawsuits-over-teen-suicides-fb41a063).

OpenAI’s CEO, Sam Altman, has said up to [1,500 people a week](https://www.theguardian.com/technology/2025/sep/11/chatgpt-may-start-alerting-authorities-about-youngsters-considering-suicide-says-ceo-sam-altman) may talk about suicide on ChatGPT.

“We have seen a problem where people that are in fragile psychiatric situations using a model like 4o can get into a worse one,” Altman said in a public question-and-answer session reported by [The Wall Street Journal](https://www.wsj.com/tech/ai/openai-sam-altman-google-code-red-c3a312ad), referring to a particular model of ChatGPT introduced in 2024. “I don’t think this is the last time we’ll face challenges like this with a model.”

An OpenAI spokesperson did not respond to requests for comment.

The company has said it [works with mental health experts](https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/) on safeguards, such as referring users to 988, the national suicide hotline. However, the lawsuits against OpenAI argue that existing safeguards aren’t good enough, and some research shows the problems are [worsening over time](https://counterhate.com/wp-content/uploads/2025/10/ChatGPT-The-Illusion-of-AI-Safety_FINAL_Oct25.pdf). OpenAI [has published](https://openai.com/index/strengthening-chatgpt-responses-in-sensitive-conversations/) its own data suggesting the opposite.

OpenAI is [defending itself in court](https://cdn.arstechnica.net/wp-content/uploads/2025/11/Raine-v-OpenAI-Answer-11-25-25.pdf), offering, early in one case, a variety of defenses ranging from denying that its product caused self-harm to alleging that the user misused the product by inducing it to discuss suicide. It has also said it’s working to [improve its safety features](https://openai.com/index/mental-health-litigation-approach/).

Smaller apps also rely on OpenAI or other AI models to power their products, executives told KFF Health News.
In interviews, startup founders and other experts said they worry that a company that simply imports those models into its own service might duplicate whatever safety flaws exist in the original product.

## Data Risks

KFF Health News’ review of the App Store found the listed age protections are minimal: Fifteen of the nearly four dozen apps say they can be downloaded by users as young as 4; an additional 11 say they can be downloaded by those 12 and up.

Privacy standards are opaque. On the App Store, several apps are described as neither tracking personally identifiable data nor sharing it with advertisers — but on their company websites, the privacy policies contain contrary descriptions, discussing the use of such data and its disclosure to advertising platforms such as AdMob.

In response to a request for comment, Apple spokesperson Adam Dema [sent](https://support.apple.com/en-us/102399) [links](https://developer.apple.com/app-store/review/guidelines/#data-use-and-sharing) to the company’s App Store policies, which bar apps from using health data for advertising and require them to display information about how they use data in general. Dema did not respond to a request for further comment about how Apple enforces these policies.

Researchers and policy advocates said that sharing psychiatric data with social media firms means patients could be profiled. They could be targeted by dodgy treatment firms or charged different prices for goods based on their health.

KFF Health News contacted several app makers about these discrepancies; two that responded said their privacy policies had been put together in error and pledged to change them to reflect their stances against advertising. (A third, the team at OhSofia!, said simply that they don’t do advertising, though their app’s [privacy policy](https://ohsofia.com/privacy) notes users “may opt out of marketing communications.”)

One executive told KFF Health News there’s business pressure to maintain access to the data.

“My general feeling is a subscription model is much, much better than any sort of advertising,” said Tim Rubin, the founder of Wellness AI, adding that he’d change the description in his app’s privacy policy.

One investor advised him not to swear off advertising, he said. “They’re like, essentially, that’s the most valuable thing about having an app like this, that data.”

“I think we’re still at the beginning of what’s going to be a revolution in how people seek psychological support and, even in some cases, therapy,” Insel said.
“And my concern is that there’s just no framework for any of this.”

---

Darius Tahir: DariusT@kff.org, [@dariustahir](http://twitter.com/dariustahir)

Oona Zenda: ozenda@kff.org

[Source link](https://kffhealthnews.org/news/article/ai-chatbots-therapy-big-risks-few-regulations/)